I0122 12:56:52.000750 9 e2e.go:243] Starting e2e run "6dee1230-77c7-42dc-9023-44bbcaf16993" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579697810 - Will randomize all specs
Will run 215 of 4412 specs
Jan 22 12:56:52.534: INFO: >>> kubeConfig: /root/.kube/config
Jan 22 12:56:52.541: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 22 12:56:52.576: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 22 12:56:52.613: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 22 12:56:52.613: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 22 12:56:52.613: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 22 12:56:52.627: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 22 12:56:52.627: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 22 12:56:52.627: INFO: e2e test version: v1.15.7
Jan 22 12:56:52.645: INFO: kube-apiserver version: v1.15.1
[sig-node] Downward API
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 12:56:52.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Jan 22 12:56:52.726: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 22 12:56:52.743: INFO: Waiting up to 5m0s for pod "downward-api-bba49b6f-59f6-4913-971a-e4047c7f00fe" in namespace "downward-api-3838" to be "success or failure"
Jan 22 12:56:52.757: INFO: Pod "downward-api-bba49b6f-59f6-4913-971a-e4047c7f00fe": Phase="Pending", Reason="", readiness=false. Elapsed: 14.122017ms
Jan 22 12:56:54.764: INFO: Pod "downward-api-bba49b6f-59f6-4913-971a-e4047c7f00fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021557583s
Jan 22 12:56:56.780: INFO: Pod "downward-api-bba49b6f-59f6-4913-971a-e4047c7f00fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036933959s
Jan 22 12:56:58.795: INFO: Pod "downward-api-bba49b6f-59f6-4913-971a-e4047c7f00fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051777744s
Jan 22 12:57:00.808: INFO: Pod "downward-api-bba49b6f-59f6-4913-971a-e4047c7f00fe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064809666s
Jan 22 12:57:02.821: INFO: Pod "downward-api-bba49b6f-59f6-4913-971a-e4047c7f00fe": Phase="Pending", Reason="", readiness=false. Elapsed: 10.078014045s
Jan 22 12:57:04.828: INFO: Pod "downward-api-bba49b6f-59f6-4913-971a-e4047c7f00fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.085308602s
STEP: Saw pod success
Jan 22 12:57:04.828: INFO: Pod "downward-api-bba49b6f-59f6-4913-971a-e4047c7f00fe" satisfied condition "success or failure"
Jan 22 12:57:04.832: INFO: Trying to get logs from node iruya-node pod downward-api-bba49b6f-59f6-4913-971a-e4047c7f00fe container dapi-container:
STEP: delete the pod
Jan 22 12:57:04.880: INFO: Waiting for pod downward-api-bba49b6f-59f6-4913-971a-e4047c7f00fe to disappear
Jan 22 12:57:04.889: INFO: Pod downward-api-bba49b6f-59f6-4913-971a-e4047c7f00fe no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 12:57:04.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3838" for this suite.
Jan 22 12:57:10.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:57:11.046: INFO: namespace downward-api-3838 deletion completed in 6.152526881s

• [SLOW TEST:18.401 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
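Note: the pod exercised above follows the standard downward-API environment-variable pattern, exposing the node address via a fieldRef on status.hostIP. A minimal sketch of such a pod, with an illustrative name, image, and command (none taken from this run):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox             # illustrative image
    command: ["sh", "-c", "env"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # the host IP the test asserts on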
[sig-cli] Kubectl client [k8s.io] Guestbook application
should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 12:57:11.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan 22 12:57:11.267: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Jan 22 12:57:11.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5202'
Jan 22 12:57:14.115: INFO: stderr: ""
Jan 22 12:57:14.115: INFO: stdout: "service/redis-slave created\n"
Jan 22 12:57:14.117: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Jan 22 12:57:14.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5202'
Jan 22 12:57:14.522: INFO: stderr: ""
Jan 22 12:57:14.522: INFO: stdout: "service/redis-master created\n"
Jan 22 12:57:14.523: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jan 22 12:57:14.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5202'
Jan 22 12:57:14.913: INFO: stderr: ""
Jan 22 12:57:14.913: INFO: stdout: "service/frontend created\n"
Jan 22 12:57:14.915: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Jan 22 12:57:14.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5202'
Jan 22 12:57:15.276: INFO: stderr: ""
Jan 22 12:57:15.276: INFO: stdout: "deployment.apps/frontend created\n"
Jan 22 12:57:15.277: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 22 12:57:15.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5202'
Jan 22 12:57:15.721: INFO: stderr: ""
Jan 22 12:57:15.721: INFO: stdout: "deployment.apps/redis-master created\n"
Jan 22 12:57:15.722: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Jan 22 12:57:15.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5202'
Jan 22 12:57:16.036: INFO: stderr: ""
Jan 22 12:57:16.036: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan 22 12:57:16.036: INFO: Waiting for all frontend pods to be Running.
Jan 22 12:57:41.088: INFO: Waiting for frontend to serve content.
Jan 22 12:57:41.158: INFO: Trying to add a new entry to the guestbook.
Jan 22 12:57:41.201: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 22 12:57:41.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5202'
Jan 22 12:57:41.454: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 22 12:57:41.454: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 22 12:57:41.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5202'
Jan 22 12:57:41.669: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 22 12:57:41.669: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 22 12:57:41.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5202'
Jan 22 12:57:41.815: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 22 12:57:41.815: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 22 12:57:41.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5202'
Jan 22 12:57:42.046: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 22 12:57:42.046: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 22 12:57:42.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5202'
Jan 22 12:57:42.245: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 22 12:57:42.245: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 22 12:57:42.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5202'
Jan 22 12:57:42.568: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 22 12:57:42.568: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 12:57:42.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5202" for this suite.
Jan 22 12:58:28.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:58:28.745: INFO: namespace kubectl-5202 deletion completed in 46.166992459s

• [SLOW TEST:77.697 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 12:58:28.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 22 12:58:28.811: INFO: Waiting up to 5m0s for pod "pod-821a6e14-23c3-486d-9f1c-888b02a30cc8" in namespace "emptydir-3521" to be "success or failure"
Jan 22 12:58:28.901: INFO: Pod "pod-821a6e14-23c3-486d-9f1c-888b02a30cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 90.152711ms
Jan 22 12:58:30.914: INFO: Pod "pod-821a6e14-23c3-486d-9f1c-888b02a30cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102892511s
Jan 22 12:58:32.929: INFO: Pod "pod-821a6e14-23c3-486d-9f1c-888b02a30cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117993054s
Jan 22 12:58:34.944: INFO: Pod "pod-821a6e14-23c3-486d-9f1c-888b02a30cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133419434s
Jan 22 12:58:36.957: INFO: Pod "pod-821a6e14-23c3-486d-9f1c-888b02a30cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145837724s
Jan 22 12:58:38.974: INFO: Pod "pod-821a6e14-23c3-486d-9f1c-888b02a30cc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.163039592s
STEP: Saw pod success
Jan 22 12:58:38.974: INFO: Pod "pod-821a6e14-23c3-486d-9f1c-888b02a30cc8" satisfied condition "success or failure"
Jan 22 12:58:38.980: INFO: Trying to get logs from node iruya-node pod pod-821a6e14-23c3-486d-9f1c-888b02a30cc8 container test-container:
STEP: delete the pod
Jan 22 12:58:39.090: INFO: Waiting for pod pod-821a6e14-23c3-486d-9f1c-888b02a30cc8 to disappear
Jan 22 12:58:39.094: INFO: Pod pod-821a6e14-23c3-486d-9f1c-888b02a30cc8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 12:58:39.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3521" for this suite.
Jan 22 12:58:45.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:58:45.275: INFO: namespace emptydir-3521 deletion completed in 6.175987474s

• [SLOW TEST:16.529 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
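Note: "(root,0666,tmpfs)" means the pod writes a file into a memory-backed emptyDir as root with mode 0666. A minimal sketch of such a pod, assuming an illustrative name, image, and command (not taken from this run):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox              # illustrative image
    command: ["sh", "-c", "touch /test-volume/file && chmod 0666 /test-volume/file && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory            # tmpfs-backed emptyDir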
[sig-apps] Deployment
RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 12:58:45.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 22 12:58:45.353: INFO: Creating deployment "test-recreate-deployment"
Jan 22 12:58:45.369: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 22 12:58:45.525: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 22 12:58:45.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"test-recreate-deployment-6df85df6b9\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Jan 22 12:58:47.557: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 12:58:49.553: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 12:58:51.555: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 12:58:53.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715294725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 12:58:55.551: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 22 12:58:55.570: INFO: Updating deployment test-recreate-deployment
Jan 22 12:58:55.570: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 22 12:58:56.017: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-1138,SelfLink:/apis/apps/v1/namespaces/deployment-1138/deployments/test-recreate-deployment,UID:b7b23449-23a9-4f70-9e09-28a0b4536c50,ResourceVersion:21429628,Generation:2,CreationTimestamp:2020-01-22 12:58:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-22 12:58:55 +0000 UTC 2020-01-22 12:58:55 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-22 12:58:55 +0000 UTC 2020-01-22 12:58:45 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}
Jan 22 12:58:56.081: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-1138,SelfLink:/apis/apps/v1/namespaces/deployment-1138/replicasets/test-recreate-deployment-5c8c9cc69d,UID:072979b1-6d92-493c-9137-0ce92625f27c,ResourceVersion:21429626,Generation:1,CreationTimestamp:2020-01-22 12:58:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment b7b23449-23a9-4f70-9e09-28a0b4536c50 0xc002746997 0xc002746998}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 22 12:58:56.081: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 22 12:58:56.082: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-1138,SelfLink:/apis/apps/v1/namespaces/deployment-1138/replicasets/test-recreate-deployment-6df85df6b9,UID:de206187-f351-40f4-8505-a42577d6d255,ResourceVersion:21429616,Generation:2,CreationTimestamp:2020-01-22 12:58:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment b7b23449-23a9-4f70-9e09-28a0b4536c50 0xc002746a67 0xc002746a68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 22 12:58:56.110: INFO: Pod "test-recreate-deployment-5c8c9cc69d-8vm79" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-8vm79,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-1138,SelfLink:/api/v1/namespaces/deployment-1138/pods/test-recreate-deployment-5c8c9cc69d-8vm79,UID:60d1ddff-e951-4dc5-99fd-38472e5e2e41,ResourceVersion:21429629,Generation:0,CreationTimestamp:2020-01-22 12:58:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 072979b1-6d92-493c-9137-0ce92625f27c 0xc002747377 0xc002747378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-77lt4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-77lt4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-77lt4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027473f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002747410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 12:58:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 12:58:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 12:58:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 12:58:55 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-22 12:58:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 12:58:56.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1138" for this suite.
Jan 22 12:59:04.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:59:04.299: INFO: namespace deployment-1138 deletion completed in 8.180814269s

• [SLOW TEST:19.024 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
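Note: the dumps above show Strategy{Type:Recreate,...}, which is what the test exercises: the old ReplicaSet is scaled to zero before the new one is created. A minimal sketch of such a spec, reusing the labels and image that appear in this run (the metadata name is a hypothetical placeholder):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-example   # hypothetical name
spec:
  replicas: 1
  strategy:
    type: Recreate         # delete old pods before creating new ones
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine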
[sig-api-machinery] Namespaces [Serial]
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 12:59:04.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 12:59:11.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8358" for this suite.
Jan 22 12:59:17.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:59:17.231: INFO: namespace namespaces-8358 deletion completed in 6.214406071s
STEP: Destroying namespace "nsdeletetest-1681" for this suite.
Jan 22 12:59:17.234: INFO: Namespace nsdeletetest-1681 was already deleted
STEP: Destroying namespace "nsdeletetest-6941" for this suite.
Jan 22 12:59:23.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:59:23.422: INFO: namespace nsdeletetest-6941 deletion completed in 6.187840162s

• [SLOW TEST:19.121 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
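Note: the service created inside the to-be-deleted namespace is an ordinary ClusterIP service; deleting the namespace must garbage-collect it. A minimal sketch under illustrative assumptions (name, selector, and port are not taken from this run):

apiVersion: v1
kind: Service
metadata:
  name: test-service        # hypothetical name
  namespace: nsdeletetest-1681   # the test namespace; deleting it must remove this service
spec:
  selector:
    app: test-app           # illustrative selector
  ports:
  - port: 80
    targetPort: 80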
[sig-storage] ConfigMap
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 12:59:23.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-ac79b7be-53ec-4bcd-bab0-7b0591b24b64
STEP: Creating a pod to test consume configMaps
Jan 22 12:59:23.503: INFO: Waiting up to 5m0s for pod "pod-configmaps-b77f68ae-b211-4142-849d-f556a1c5215d" in namespace "configmap-1185" to be "success or failure"
Jan 22 12:59:23.517: INFO: Pod "pod-configmaps-b77f68ae-b211-4142-849d-f556a1c5215d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.729543ms
Jan 22 12:59:25.532: INFO: Pod "pod-configmaps-b77f68ae-b211-4142-849d-f556a1c5215d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02816755s
Jan 22 12:59:27.548: INFO: Pod "pod-configmaps-b77f68ae-b211-4142-849d-f556a1c5215d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043947647s
Jan 22 12:59:29.557: INFO: Pod "pod-configmaps-b77f68ae-b211-4142-849d-f556a1c5215d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053675449s
Jan 22 12:59:31.570: INFO: Pod "pod-configmaps-b77f68ae-b211-4142-849d-f556a1c5215d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06681682s
Jan 22 12:59:33.579: INFO: Pod "pod-configmaps-b77f68ae-b211-4142-849d-f556a1c5215d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075413936s
STEP: Saw pod success
Jan 22 12:59:33.579: INFO: Pod "pod-configmaps-b77f68ae-b211-4142-849d-f556a1c5215d" satisfied condition "success or failure"
Jan 22 12:59:33.585: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b77f68ae-b211-4142-849d-f556a1c5215d container configmap-volume-test:
STEP: delete the pod
Jan 22 12:59:33.731: INFO: Waiting for pod pod-configmaps-b77f68ae-b211-4142-849d-f556a1c5215d to disappear
Jan 22 12:59:33.789: INFO: Pod pod-configmaps-b77f68ae-b211-4142-849d-f556a1c5215d no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 12:59:33.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1185" for this suite.
Jan 22 12:59:39.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:59:39.958: INFO: namespace configmap-1185 deletion completed in 6.155106505s

• [SLOW TEST:16.535 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
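Note: "as non-root" means the consuming container runs under a non-zero UID while reading the mounted ConfigMap. A minimal sketch, reusing the ConfigMap name from this run; the pod name, image, UID, key, and mount path are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root UID, as the test name implies
  containers:
  - name: configmap-volume-test
    image: busybox               # illustrative image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]   # illustrative key
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-ac79b7be-53ec-4bcd-bab0-7b0591b24b64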
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 12:59:39.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 22 12:59:40.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-3224'
Jan 22 12:59:40.178: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 22 12:59:40.178: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan 22 12:59:42.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3224'
Jan 22 12:59:42.531: INFO: stderr: ""
Jan 22 12:59:42.531: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 12:59:42.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3224" for this suite.
Jan 22 12:59:48.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:59:48.702: INFO: namespace kubectl-3224 deletion completed in 6.160802793s

• [SLOW TEST:8.743 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 12:59:48.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 22 12:59:48.847: INFO: Waiting up to 5m0s for pod "downwardapi-volume-624765d0-e139-4a2e-a9fb-85b2db9a92ad" in namespace "downward-api-5787" to be "success or failure"
Jan 22 12:59:48.872: INFO: Pod "downwardapi-volume-624765d0-e139-4a2e-a9fb-85b2db9a92ad": Phase="Pending", Reason="", readiness=false. Elapsed: 25.216983ms
Jan 22 12:59:50.883: INFO: Pod "downwardapi-volume-624765d0-e139-4a2e-a9fb-85b2db9a92ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036701286s
Jan 22 12:59:53.372: INFO: Pod "downwardapi-volume-624765d0-e139-4a2e-a9fb-85b2db9a92ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.525062324s
Jan 22 12:59:55.387: INFO: Pod "downwardapi-volume-624765d0-e139-4a2e-a9fb-85b2db9a92ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.540530483s
Jan 22 12:59:57.396: INFO: Pod "downwardapi-volume-624765d0-e139-4a2e-a9fb-85b2db9a92ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.548837156s
STEP: Saw pod success
Jan 22 12:59:57.396: INFO: Pod "downwardapi-volume-624765d0-e139-4a2e-a9fb-85b2db9a92ad" satisfied condition "success or failure"
Jan 22 12:59:57.403: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-624765d0-e139-4a2e-a9fb-85b2db9a92ad container client-container:
STEP: delete the pod
Jan 22 12:59:57.449: INFO: Waiting for pod downwardapi-volume-624765d0-e139-4a2e-a9fb-85b2db9a92ad to disappear
Jan 22 12:59:57.461: INFO: Pod downwardapi-volume-624765d0-e139-4a2e-a9fb-85b2db9a92ad no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 12:59:57.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5787" for this suite.
Jan 22 13:00:03.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:00:03.851: INFO: namespace downward-api-5787 deletion completed in 6.309771648s

• [SLOW TEST:15.149 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
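Note: the downward API volume plugin projects a container's own resource fields into files. A minimal sketch of a pod exposing its memory request this way; the name, image, request value, and file path are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi                 # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory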
[sig-cli] Kubectl client [k8s.io] Kubectl run pod
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 13:00:03.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 22 13:00:03.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-900'
Jan 22 13:00:04.114: INFO: stderr: ""
Jan 22 13:00:04.114: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jan 22 13:00:04.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-900'
Jan 22 13:00:08.312: INFO: stderr: ""
Jan 22 13:00:08.312: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:00:08.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-900" for this suite.
Jan 22 13:00:16.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:00:16.552: INFO: namespace kubectl-900 deletion completed in 8.199485379s

• [SLOW TEST:12.701 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 13:00:16.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 22 13:00:16.634: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9eb9f2cb-9a01-4c57-bb3a-49afc2a650c3" in namespace "projected-6765" to be "success or failure"
Jan 22 13:00:16.666: INFO: Pod "downwardapi-volume-9eb9f2cb-9a01-4c57-bb3a-49afc2a650c3": Phase="Pending", Reason="", readiness=false. Elapsed: 31.332462ms
Jan 22 13:00:18.680: INFO: Pod "downwardapi-volume-9eb9f2cb-9a01-4c57-bb3a-49afc2a650c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046143575s
Jan 22 13:00:20.692: INFO: Pod "downwardapi-volume-9eb9f2cb-9a01-4c57-bb3a-49afc2a650c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057804255s
Jan 22 13:00:22.709: INFO: Pod "downwardapi-volume-9eb9f2cb-9a01-4c57-bb3a-49afc2a650c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0746475s
Jan 22 13:00:24.939: INFO: Pod "downwardapi-volume-9eb9f2cb-9a01-4c57-bb3a-49afc2a650c3": Phase="Running", Reason="", readiness=true. Elapsed: 8.30435449s
Jan 22 13:00:26.952: INFO: Pod "downwardapi-volume-9eb9f2cb-9a01-4c57-bb3a-49afc2a650c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.318090626s
STEP: Saw pod success
Jan 22 13:00:26.952: INFO: Pod "downwardapi-volume-9eb9f2cb-9a01-4c57-bb3a-49afc2a650c3" satisfied condition "success or failure"
Jan 22 13:00:26.959: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9eb9f2cb-9a01-4c57-bb3a-49afc2a650c3 container client-container:
STEP: delete the pod
Jan 22 13:00:27.036: INFO: Waiting for pod downwardapi-volume-9eb9f2cb-9a01-4c57-bb3a-49afc2a650c3 to disappear
Jan 22 13:00:27.043: INFO: Pod downwardapi-volume-9eb9f2cb-9a01-4c57-bb3a-49afc2a650c3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:00:27.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6765" for this suite.
Jan 22 13:00:33.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:00:33.272: INFO: namespace projected-6765 deletion completed in 6.224140545s

• [SLOW TEST:16.718 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
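Note: this variant wraps the downward API source in a projected volume rather than using the downwardAPI volume type directly. A minimal sketch exposing the container's CPU request; the name, image, request value, and file path are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                      # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 100m                       # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu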
[sig-cli] Kubectl client [k8s.io] Proxy server
should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 13:00:33.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jan 22 13:00:33.358: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix646952047/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:00:33.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3887" for this suite.
Jan 22 13:00:39.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:00:39.674: INFO: namespace kubectl-3887 deletion completed in 6.209506962s

• [SLOW TEST:6.402 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 13:00:39.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-53e336ce-f66f-42bb-9e03-723c74247d8a
STEP: Creating a pod to test consume secrets
Jan 22 13:00:39.818: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ac03d04a-210e-4119-9ff7-b84dbce68d17" in namespace "projected-6177" to be "success or failure"
Jan 22 13:00:39.835: INFO: Pod "pod-projected-secrets-ac03d04a-210e-4119-9ff7-b84dbce68d17": Phase="Pending", Reason="", readiness=false. Elapsed: 16.781475ms
Jan 22 13:00:41.850: INFO: Pod "pod-projected-secrets-ac03d04a-210e-4119-9ff7-b84dbce68d17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032064242s
Jan 22 13:00:43.876: INFO: Pod "pod-projected-secrets-ac03d04a-210e-4119-9ff7-b84dbce68d17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057873816s
Jan 22 13:00:45.886: INFO: Pod "pod-projected-secrets-ac03d04a-210e-4119-9ff7-b84dbce68d17": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06799461s
Jan 22 13:00:47.920: INFO: Pod "pod-projected-secrets-ac03d04a-210e-4119-9ff7-b84dbce68d17": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102199461s
Jan 22 13:00:49.934: INFO: Pod "pod-projected-secrets-ac03d04a-210e-4119-9ff7-b84dbce68d17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.115630341s
STEP: Saw pod success
Jan 22 13:00:49.934: INFO: Pod "pod-projected-secrets-ac03d04a-210e-4119-9ff7-b84dbce68d17" satisfied condition "success or failure"
Jan 22 13:00:49.939: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ac03d04a-210e-4119-9ff7-b84dbce68d17 container projected-secret-volume-test:
STEP: delete the pod
Jan 22 13:00:50.057: INFO: Waiting for pod pod-projected-secrets-ac03d04a-210e-4119-9ff7-b84dbce68d17 to disappear
Jan 22 13:00:50.067: INFO: Pod pod-projected-secrets-ac03d04a-210e-4119-9ff7-b84dbce68d17 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:00:50.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6177" for this suite.
Jan 22 13:00:56.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:00:56.275: INFO: namespace projected-6177 deletion completed in 6.166126002s

• [SLOW TEST:16.600 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
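Note: "with mappings" means the secret's keys are remapped to different file paths via items. A minimal sketch, reusing the secret name from this run; the pod name, image, key, and paths are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                      # illustrative image
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-53e336ce-f66f-42bb-9e03-723c74247d8a
          items:
          - key: data-1                 # illustrative key
            path: new-path-data-1       # the "mapping": the key is exposed under a new path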
Elapsed: 10.115630341s STEP: Saw pod success Jan 22 13:00:49.934: INFO: Pod "pod-projected-secrets-ac03d04a-210e-4119-9ff7-b84dbce68d17" satisfied condition "success or failure" Jan 22 13:00:49.939: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ac03d04a-210e-4119-9ff7-b84dbce68d17 container projected-secret-volume-test: STEP: delete the pod Jan 22 13:00:50.057: INFO: Waiting for pod pod-projected-secrets-ac03d04a-210e-4119-9ff7-b84dbce68d17 to disappear Jan 22 13:00:50.067: INFO: Pod pod-projected-secrets-ac03d04a-210e-4119-9ff7-b84dbce68d17 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:00:50.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6177" for this suite. Jan 22 13:00:56.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:00:56.275: INFO: namespace projected-6177 deletion completed in 6.166126002s • [SLOW TEST:16.600 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:00:56.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:01:56.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8864" for this suite. 
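The container-probe spec above is deliberately uneventful: it sits for a full minute (13:00:56 to 13:01:56) to verify that a pod whose readiness probe always fails is never reported Ready and its container is never restarted. A minimal sketch of such a pod, assuming an exec probe that simply exits non-zero:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-never    # illustrative name
spec:
  containers:
  - name: probe-test
    image: busybox
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so Ready stays False
      initialDelaySeconds: 5
      periodSeconds: 5
    # no livenessProbe: the kubelet has no reason to restart the container

Readiness only gates the pod's Ready condition and Service endpoints; restarts are driven by liveness, which is exactly the distinction the spec checks.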
Jan 22 13:02:18.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:02:18.599: INFO: namespace container-probe-8864 deletion completed in 22.159115461s • [SLOW TEST:82.324 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:02:18.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 22 13:02:18.737: INFO: Waiting up to 5m0s for pod "downward-api-11265698-c6bd-4721-b02c-a0085b616e45" in namespace "downward-api-6690" to be "success or failure" Jan 22 13:02:18.745: INFO: Pod "downward-api-11265698-c6bd-4721-b02c-a0085b616e45": Phase="Pending", Reason="", readiness=false. Elapsed: 7.963849ms Jan 22 13:02:20.752: INFO: Pod "downward-api-11265698-c6bd-4721-b02c-a0085b616e45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014997248s Jan 22 13:02:22.764: INFO: Pod "downward-api-11265698-c6bd-4721-b02c-a0085b616e45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027179902s Jan 22 13:02:24.890: INFO: Pod "downward-api-11265698-c6bd-4721-b02c-a0085b616e45": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153150703s Jan 22 13:02:26.911: INFO: Pod "downward-api-11265698-c6bd-4721-b02c-a0085b616e45": Phase="Pending", Reason="", readiness=false. Elapsed: 8.173598921s Jan 22 13:02:28.929: INFO: Pod "downward-api-11265698-c6bd-4721-b02c-a0085b616e45": Phase="Pending", Reason="", readiness=false. Elapsed: 10.192208192s Jan 22 13:02:30.938: INFO: Pod "downward-api-11265698-c6bd-4721-b02c-a0085b616e45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.200805068s STEP: Saw pod success Jan 22 13:02:30.938: INFO: Pod "downward-api-11265698-c6bd-4721-b02c-a0085b616e45" satisfied condition "success or failure" Jan 22 13:02:30.942: INFO: Trying to get logs from node iruya-node pod downward-api-11265698-c6bd-4721-b02c-a0085b616e45 container dapi-container: STEP: delete the pod Jan 22 13:02:31.164: INFO: Waiting for pod downward-api-11265698-c6bd-4721-b02c-a0085b616e45 to disappear Jan 22 13:02:31.172: INFO: Pod downward-api-11265698-c6bd-4721-b02c-a0085b616e45 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:02:31.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6690" for this suite. 
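The 'default limits.cpu/memory from node allocatable' spec exercises a documented downward-API fallback: when a container declares no limits, limits.cpu and limits.memory resolve to the node's allocatable CPU and memory. A hedged sketch of the kind of pod this creates; names and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: dapi-defaults    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep _LIMIT"]
    # no resources stanza: the limits below fall back to node allocatable
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory

On the node described later in this log (allocatable cpu: 4, memory: 3936676Ki), CPU_LIMIT would come out as 4 and MEMORY_LIMIT as that memory figure expressed in bytes.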
Jan 22 13:02:37.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:02:37.393: INFO: namespace downward-api-6690 deletion completed in 6.211725121s • [SLOW TEST:18.793 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:02:37.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 22 13:03:05.673: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2026 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 13:03:05.673: INFO: >>> kubeConfig: /root/.kube/config I0122 13:03:05.754778 9 log.go:172] (0xc002b282c0) (0xc0012c5a40) Create stream I0122 13:03:05.754827 9 log.go:172] (0xc002b282c0) (0xc0012c5a40) Stream added, broadcasting: 1 I0122 13:03:05.764247 9 log.go:172] (0xc002b282c0) Reply frame received for 1 I0122 13:03:05.764348 9 log.go:172] (0xc002b282c0) (0xc00147a000) Create stream I0122 13:03:05.764362 9 log.go:172] (0xc002b282c0) (0xc00147a000) Stream added, broadcasting: 3 I0122 13:03:05.765992 9 log.go:172] (0xc002b282c0) Reply frame received for 3 I0122 13:03:05.766025 9 log.go:172] (0xc002b282c0) (0xc00147a1e0) Create stream I0122 13:03:05.766035 9 log.go:172] (0xc002b282c0) (0xc00147a1e0) Stream added, broadcasting: 5 I0122 13:03:05.767726 9 log.go:172] (0xc002b282c0) Reply frame received for 5 I0122 13:03:05.991908 9 log.go:172] (0xc002b282c0) Data frame received for 3 I0122 13:03:05.991956 9 log.go:172] (0xc00147a000) (3) Data frame handling I0122 13:03:05.991987 9 log.go:172] (0xc00147a000) (3) Data frame sent I0122 13:03:06.155685 9 log.go:172] (0xc002b282c0) (0xc00147a000) Stream removed, broadcasting: 3 I0122 13:03:06.156046 9 log.go:172] (0xc002b282c0) Data frame received for 1 I0122 13:03:06.156274 9 log.go:172] (0xc002b282c0) (0xc00147a1e0) Stream removed, broadcasting: 5 I0122 13:03:06.156495 9 log.go:172] (0xc0012c5a40) (1) Data frame handling I0122 13:03:06.156574 9 log.go:172] (0xc0012c5a40) (1) Data frame sent I0122 13:03:06.156632 9 log.go:172] (0xc002b282c0) (0xc0012c5a40) Stream removed, broadcasting: 1 I0122 13:03:06.156694 9 log.go:172] (0xc002b282c0) Go away received I0122 13:03:06.158928 9 log.go:172] (0xc002b282c0) (0xc0012c5a40) 
Stream removed, broadcasting: 1 I0122 13:03:06.159037 9 log.go:172] (0xc002b282c0) (0xc00147a000) Stream removed, broadcasting: 3 I0122 13:03:06.159094 9 log.go:172] (0xc002b282c0) (0xc00147a1e0) Stream removed, broadcasting: 5 Jan 22 13:03:06.159: INFO: Exec stderr: "" Jan 22 13:03:06.159: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2026 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 13:03:06.159: INFO: >>> kubeConfig: /root/.kube/config I0122 13:03:06.260159 9 log.go:172] (0xc002b28d10) (0xc000a44000) Create stream I0122 13:03:06.260421 9 log.go:172] (0xc002b28d10) (0xc000a44000) Stream added, broadcasting: 1 I0122 13:03:06.275503 9 log.go:172] (0xc002b28d10) Reply frame received for 1 I0122 13:03:06.275677 9 log.go:172] (0xc002b28d10) (0xc00147a320) Create stream I0122 13:03:06.275697 9 log.go:172] (0xc002b28d10) (0xc00147a320) Stream added, broadcasting: 3 I0122 13:03:06.279313 9 log.go:172] (0xc002b28d10) Reply frame received for 3 I0122 13:03:06.279431 9 log.go:172] (0xc002b28d10) (0xc001844140) Create stream I0122 13:03:06.279471 9 log.go:172] (0xc002b28d10) (0xc001844140) Stream added, broadcasting: 5 I0122 13:03:06.285793 9 log.go:172] (0xc002b28d10) Reply frame received for 5 I0122 13:03:06.452357 9 log.go:172] (0xc002b28d10) Data frame received for 3 I0122 13:03:06.452619 9 log.go:172] (0xc00147a320) (3) Data frame handling I0122 13:03:06.452669 9 log.go:172] (0xc00147a320) (3) Data frame sent I0122 13:03:06.649999 9 log.go:172] (0xc002b28d10) (0xc00147a320) Stream removed, broadcasting: 3 I0122 13:03:06.650313 9 log.go:172] (0xc002b28d10) (0xc001844140) Stream removed, broadcasting: 5 I0122 13:03:06.650537 9 log.go:172] (0xc002b28d10) Data frame received for 1 I0122 13:03:06.650592 9 log.go:172] (0xc000a44000) (1) Data frame handling I0122 13:03:06.650632 9 log.go:172] (0xc000a44000) (1) Data frame sent I0122 13:03:06.650649 9 log.go:172] (0xc002b28d10) (0xc000a44000) Stream removed, broadcasting: 1 I0122 13:03:06.650688 9 log.go:172] (0xc002b28d10) Go away received I0122 13:03:06.651094 9 log.go:172] (0xc002b28d10) (0xc000a44000) Stream removed, broadcasting: 1 I0122 13:03:06.651167 9 log.go:172] (0xc002b28d10) (0xc00147a320) Stream removed, broadcasting: 3 I0122 13:03:06.651193 9 log.go:172] (0xc002b28d10) (0xc001844140) Stream removed, broadcasting: 5 Jan 22 13:03:06.651: INFO: Exec stderr: "" Jan 22 13:03:06.651: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2026 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 13:03:06.651: INFO: >>> kubeConfig: /root/.kube/config I0122 13:03:06.782610 9 log.go:172] (0xc002b29600) (0xc000a446e0) Create stream I0122 13:03:06.782680 9 log.go:172] (0xc002b29600) (0xc000a446e0) Stream added, broadcasting: 1 I0122 13:03:06.793541 9 log.go:172] (0xc002b29600) Reply frame received for 1 I0122 13:03:06.793590 9 log.go:172] (0xc002b29600) (0xc0018441e0) Create stream I0122 13:03:06.793605 9 log.go:172] (0xc002b29600) (0xc0018441e0) Stream added, broadcasting: 3 I0122 13:03:06.795162 9 log.go:172] (0xc002b29600) Reply frame received for 3 I0122 13:03:06.795197 9 log.go:172] (0xc002b29600) (0xc0000245a0) Create stream I0122 13:03:06.795227 9 log.go:172] (0xc002b29600) (0xc0000245a0) Stream added, broadcasting: 5 I0122 13:03:06.801308 9 log.go:172] (0xc002b29600) Reply frame received for 5 I0122 13:03:06.987383 9 log.go:172] 
(0xc002b29600) Data frame received for 3 I0122 13:03:06.987450 9 log.go:172] (0xc0018441e0) (3) Data frame handling I0122 13:03:06.987483 9 log.go:172] (0xc0018441e0) (3) Data frame sent I0122 13:03:07.154977 9 log.go:172] (0xc002b29600) (0xc0018441e0) Stream removed, broadcasting: 3 I0122 13:03:07.155183 9 log.go:172] (0xc002b29600) Data frame received for 1 I0122 13:03:07.155211 9 log.go:172] (0xc000a446e0) (1) Data frame handling I0122 13:03:07.155261 9 log.go:172] (0xc000a446e0) (1) Data frame sent I0122 13:03:07.155281 9 log.go:172] (0xc002b29600) (0xc000a446e0) Stream removed, broadcasting: 1 I0122 13:03:07.155404 9 log.go:172] (0xc002b29600) (0xc0000245a0) Stream removed, broadcasting: 5 I0122 13:03:07.155692 9 log.go:172] (0xc002b29600) (0xc000a446e0) Stream removed, broadcasting: 1 I0122 13:03:07.155960 9 log.go:172] (0xc002b29600) (0xc0018441e0) Stream removed, broadcasting: 3 I0122 13:03:07.155982 9 log.go:172] (0xc002b29600) (0xc0000245a0) Stream removed, broadcasting: 5 Jan 22 13:03:07.156: INFO: Exec stderr: "" I0122 13:03:07.156088 9 log.go:172] (0xc002b29600) Go away received Jan 22 13:03:07.156: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2026 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 13:03:07.156: INFO: >>> kubeConfig: /root/.kube/config I0122 13:03:07.258408 9 log.go:172] (0xc000994dc0) (0xc0018443c0) Create stream I0122 13:03:07.258569 9 log.go:172] (0xc000994dc0) (0xc0018443c0) Stream added, broadcasting: 1 I0122 13:03:07.279170 9 log.go:172] (0xc000994dc0) Reply frame received for 1 I0122 13:03:07.279420 9 log.go:172] (0xc000994dc0) (0xc000024be0) Create stream I0122 13:03:07.279473 9 log.go:172] (0xc000994dc0) (0xc000024be0) Stream added, broadcasting: 3 I0122 13:03:07.282199 9 log.go:172] (0xc000994dc0) Reply frame received for 3 I0122 13:03:07.282267 9 log.go:172] (0xc000994dc0) (0xc0002375e0) Create stream I0122 13:03:07.282279 9 log.go:172] (0xc000994dc0) (0xc0002375e0) Stream added, broadcasting: 5 I0122 13:03:07.283993 9 log.go:172] (0xc000994dc0) Reply frame received for 5 I0122 13:03:07.421999 9 log.go:172] (0xc000994dc0) Data frame received for 3 I0122 13:03:07.422104 9 log.go:172] (0xc000024be0) (3) Data frame handling I0122 13:03:07.422144 9 log.go:172] (0xc000024be0) (3) Data frame sent I0122 13:03:07.525120 9 log.go:172] (0xc000994dc0) Data frame received for 1 I0122 13:03:07.525301 9 log.go:172] (0xc000994dc0) (0xc0002375e0) Stream removed, broadcasting: 5 I0122 13:03:07.525334 9 log.go:172] (0xc0018443c0) (1) Data frame handling I0122 13:03:07.525360 9 log.go:172] (0xc0018443c0) (1) Data frame sent I0122 13:03:07.525384 9 log.go:172] (0xc000994dc0) (0xc000024be0) Stream removed, broadcasting: 3 I0122 13:03:07.525420 9 log.go:172] (0xc000994dc0) (0xc0018443c0) Stream removed, broadcasting: 1 I0122 13:03:07.525444 9 log.go:172] (0xc000994dc0) Go away received I0122 13:03:07.525760 9 log.go:172] (0xc000994dc0) (0xc0018443c0) Stream removed, broadcasting: 1 I0122 13:03:07.525788 9 log.go:172] (0xc000994dc0) (0xc000024be0) Stream removed, broadcasting: 3 I0122 13:03:07.525800 9 log.go:172] (0xc000994dc0) (0xc0002375e0) Stream removed, broadcasting: 5 Jan 22 13:03:07.525: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 22 13:03:07.525: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2026 PodName:test-pod 
ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 13:03:07.526: INFO: >>> kubeConfig: /root/.kube/config I0122 13:03:07.583321 9 log.go:172] (0xc001278420) (0xc001820460) Create stream I0122 13:03:07.583386 9 log.go:172] (0xc001278420) (0xc001820460) Stream added, broadcasting: 1 I0122 13:03:07.592101 9 log.go:172] (0xc001278420) Reply frame received for 1 I0122 13:03:07.592171 9 log.go:172] (0xc001278420) (0xc00147a460) Create stream I0122 13:03:07.592190 9 log.go:172] (0xc001278420) (0xc00147a460) Stream added, broadcasting: 3 I0122 13:03:07.593491 9 log.go:172] (0xc001278420) Reply frame received for 3 I0122 13:03:07.593517 9 log.go:172] (0xc001278420) (0xc001820500) Create stream I0122 13:03:07.593527 9 log.go:172] (0xc001278420) (0xc001820500) Stream added, broadcasting: 5 I0122 13:03:07.597791 9 log.go:172] (0xc001278420) Reply frame received for 5 I0122 13:03:07.675248 9 log.go:172] (0xc001278420) Data frame received for 3 I0122 13:03:07.675291 9 log.go:172] (0xc00147a460) (3) Data frame handling I0122 13:03:07.675322 9 log.go:172] (0xc00147a460) (3) Data frame sent I0122 13:03:07.766099 9 log.go:172] (0xc001278420) Data frame received for 1 I0122 13:03:07.766158 9 log.go:172] (0xc001278420) (0xc00147a460) Stream removed, broadcasting: 3 I0122 13:03:07.766223 9 log.go:172] (0xc001820460) (1) Data frame handling I0122 13:03:07.766247 9 log.go:172] (0xc001820460) (1) Data frame sent I0122 13:03:07.766264 9 log.go:172] (0xc001278420) (0xc001820460) Stream removed, broadcasting: 1 I0122 13:03:07.766720 9 log.go:172] (0xc001278420) (0xc001820500) Stream removed, broadcasting: 5 I0122 13:03:07.766757 9 log.go:172] (0xc001278420) Go away received I0122 13:03:07.767103 9 log.go:172] (0xc001278420) (0xc001820460) Stream removed, broadcasting: 1 I0122 13:03:07.767267 9 log.go:172] (0xc001278420) (0xc00147a460) Stream removed, broadcasting: 3 I0122 13:03:07.767290 9 log.go:172] (0xc001278420) (0xc001820500) Stream removed, broadcasting: 5 Jan 22 13:03:07.767: INFO: Exec stderr: "" Jan 22 13:03:07.767: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2026 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 13:03:07.767: INFO: >>> kubeConfig: /root/.kube/config I0122 13:03:07.822610 9 log.go:172] (0xc00143a0b0) (0xc000a44aa0) Create stream I0122 13:03:07.822642 9 log.go:172] (0xc00143a0b0) (0xc000a44aa0) Stream added, broadcasting: 1 I0122 13:03:07.828092 9 log.go:172] (0xc00143a0b0) Reply frame received for 1 I0122 13:03:07.828122 9 log.go:172] (0xc00143a0b0) (0xc00147a6e0) Create stream I0122 13:03:07.828127 9 log.go:172] (0xc00143a0b0) (0xc00147a6e0) Stream added, broadcasting: 3 I0122 13:03:07.829064 9 log.go:172] (0xc00143a0b0) Reply frame received for 3 I0122 13:03:07.829082 9 log.go:172] (0xc00143a0b0) (0xc001820780) Create stream I0122 13:03:07.829089 9 log.go:172] (0xc00143a0b0) (0xc001820780) Stream added, broadcasting: 5 I0122 13:03:07.830132 9 log.go:172] (0xc00143a0b0) Reply frame received for 5 I0122 13:03:07.913776 9 log.go:172] (0xc00143a0b0) Data frame received for 3 I0122 13:03:07.914057 9 log.go:172] (0xc00147a6e0) (3) Data frame handling I0122 13:03:07.914299 9 log.go:172] (0xc00147a6e0) (3) Data frame sent I0122 13:03:08.019144 9 log.go:172] (0xc00143a0b0) Data frame received for 1 I0122 13:03:08.019384 9 log.go:172] (0xc000a44aa0) (1) Data frame handling I0122 13:03:08.019447 9 log.go:172] (0xc000a44aa0) (1) Data 
frame sent I0122 13:03:08.020169 9 log.go:172] (0xc00143a0b0) (0xc000a44aa0) Stream removed, broadcasting: 1 I0122 13:03:08.021355 9 log.go:172] (0xc00143a0b0) (0xc00147a6e0) Stream removed, broadcasting: 3 I0122 13:03:08.021604 9 log.go:172] (0xc00143a0b0) (0xc001820780) Stream removed, broadcasting: 5 I0122 13:03:08.021708 9 log.go:172] (0xc00143a0b0) (0xc000a44aa0) Stream removed, broadcasting: 1 I0122 13:03:08.021727 9 log.go:172] (0xc00143a0b0) (0xc00147a6e0) Stream removed, broadcasting: 3 I0122 13:03:08.021740 9 log.go:172] (0xc00143a0b0) (0xc001820780) Stream removed, broadcasting: 5 I0122 13:03:08.022918 9 log.go:172] (0xc00143a0b0) Go away received Jan 22 13:03:08.023: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 22 13:03:08.023: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2026 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 13:03:08.023: INFO: >>> kubeConfig: /root/.kube/config I0122 13:03:08.097160 9 log.go:172] (0xc0012791e0) (0xc001820960) Create stream I0122 13:03:08.097226 9 log.go:172] (0xc0012791e0) (0xc001820960) Stream added, broadcasting: 1 I0122 13:03:08.106080 9 log.go:172] (0xc0012791e0) Reply frame received for 1 I0122 13:03:08.106101 9 log.go:172] (0xc0012791e0) (0xc00147a960) Create stream I0122 13:03:08.106107 9 log.go:172] (0xc0012791e0) (0xc00147a960) Stream added, broadcasting: 3 I0122 13:03:08.107081 9 log.go:172] (0xc0012791e0) Reply frame received for 3 I0122 13:03:08.107170 9 log.go:172] (0xc0012791e0) (0xc000237720) Create stream I0122 13:03:08.107177 9 log.go:172] (0xc0012791e0) (0xc000237720) Stream added, broadcasting: 5 I0122 13:03:08.110011 9 log.go:172] (0xc0012791e0) Reply frame received for 5 I0122 13:03:08.212117 9 log.go:172] (0xc0012791e0) Data frame received for 3 I0122 13:03:08.212165 9 log.go:172] (0xc00147a960) (3) Data frame handling I0122 13:03:08.212189 9 log.go:172] (0xc00147a960) (3) Data frame sent I0122 13:03:08.314284 9 log.go:172] (0xc0012791e0) (0xc00147a960) Stream removed, broadcasting: 3 I0122 13:03:08.314791 9 log.go:172] (0xc0012791e0) Data frame received for 1 I0122 13:03:08.314908 9 log.go:172] (0xc001820960) (1) Data frame handling I0122 13:03:08.315004 9 log.go:172] (0xc001820960) (1) Data frame sent I0122 13:03:08.315051 9 log.go:172] (0xc0012791e0) (0xc001820960) Stream removed, broadcasting: 1 I0122 13:03:08.315557 9 log.go:172] (0xc0012791e0) (0xc000237720) Stream removed, broadcasting: 5 I0122 13:03:08.315748 9 log.go:172] (0xc0012791e0) (0xc001820960) Stream removed, broadcasting: 1 I0122 13:03:08.315827 9 log.go:172] (0xc0012791e0) (0xc00147a960) Stream removed, broadcasting: 3 I0122 13:03:08.315898 9 log.go:172] (0xc0012791e0) (0xc000237720) Stream removed, broadcasting: 5 Jan 22 13:03:08.316: INFO: Exec stderr: "" Jan 22 13:03:08.316: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2026 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 13:03:08.316: INFO: >>> kubeConfig: /root/.kube/config I0122 13:03:08.370796 9 log.go:172] (0xc001279ef0) (0xc001820d20) Create stream I0122 13:03:08.370836 9 log.go:172] (0xc001279ef0) (0xc001820d20) Stream added, broadcasting: 1 I0122 13:03:08.380496 9 log.go:172] (0xc001279ef0) Reply frame received for 1 I0122 13:03:08.380576 9 log.go:172] (0xc001279ef0) 
(0xc001844500) Create stream I0122 13:03:08.380600 9 log.go:172] (0xc001279ef0) (0xc001844500) Stream added, broadcasting: 3 I0122 13:03:08.383187 9 log.go:172] (0xc001279ef0) Reply frame received for 3 I0122 13:03:08.383231 9 log.go:172] (0xc001279ef0) (0xc00147aa00) Create stream I0122 13:03:08.383246 9 log.go:172] (0xc001279ef0) (0xc00147aa00) Stream added, broadcasting: 5 I0122 13:03:08.389332 9 log.go:172] (0xc001279ef0) Reply frame received for 5 I0122 13:03:08.569930 9 log.go:172] (0xc001279ef0) Data frame received for 3 I0122 13:03:08.570000 9 log.go:172] (0xc001844500) (3) Data frame handling I0122 13:03:08.570039 9 log.go:172] (0xc001844500) (3) Data frame sent I0122 13:03:08.971899 9 log.go:172] (0xc001279ef0) Data frame received for 1 I0122 13:03:08.972105 9 log.go:172] (0xc001820d20) (1) Data frame handling I0122 13:03:08.972138 9 log.go:172] (0xc001820d20) (1) Data frame sent I0122 13:03:08.972164 9 log.go:172] (0xc001279ef0) (0xc001820d20) Stream removed, broadcasting: 1 I0122 13:03:08.980184 9 log.go:172] (0xc001279ef0) (0xc001844500) Stream removed, broadcasting: 3 I0122 13:03:08.980328 9 log.go:172] (0xc001279ef0) (0xc00147aa00) Stream removed, broadcasting: 5 I0122 13:03:08.980365 9 log.go:172] (0xc001279ef0) Go away received I0122 13:03:08.980467 9 log.go:172] (0xc001279ef0) (0xc001820d20) Stream removed, broadcasting: 1 I0122 13:03:08.980490 9 log.go:172] (0xc001279ef0) (0xc001844500) Stream removed, broadcasting: 3 I0122 13:03:08.980505 9 log.go:172] (0xc001279ef0) (0xc00147aa00) Stream removed, broadcasting: 5 Jan 22 13:03:08.980: INFO: Exec stderr: "" Jan 22 13:03:08.980: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2026 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 13:03:08.980: INFO: >>> kubeConfig: /root/.kube/config I0122 13:03:09.065661 9 log.go:172] (0xc00124ec60) (0xc000237c20) Create stream I0122 13:03:09.066102 9 log.go:172] (0xc00124ec60) (0xc000237c20) Stream added, broadcasting: 1 I0122 13:03:09.086001 9 log.go:172] (0xc00124ec60) Reply frame received for 1 I0122 13:03:09.086103 9 log.go:172] (0xc00124ec60) (0xc001820dc0) Create stream I0122 13:03:09.086119 9 log.go:172] (0xc00124ec60) (0xc001820dc0) Stream added, broadcasting: 3 I0122 13:03:09.088617 9 log.go:172] (0xc00124ec60) Reply frame received for 3 I0122 13:03:09.088654 9 log.go:172] (0xc00124ec60) (0xc000a44be0) Create stream I0122 13:03:09.088672 9 log.go:172] (0xc00124ec60) (0xc000a44be0) Stream added, broadcasting: 5 I0122 13:03:09.099934 9 log.go:172] (0xc00124ec60) Reply frame received for 5 I0122 13:03:09.287597 9 log.go:172] (0xc00124ec60) Data frame received for 3 I0122 13:03:09.287676 9 log.go:172] (0xc001820dc0) (3) Data frame handling I0122 13:03:09.287702 9 log.go:172] (0xc001820dc0) (3) Data frame sent I0122 13:03:09.445378 9 log.go:172] (0xc00124ec60) (0xc000a44be0) Stream removed, broadcasting: 5 I0122 13:03:09.445502 9 log.go:172] (0xc00124ec60) Data frame received for 1 I0122 13:03:09.445542 9 log.go:172] (0xc00124ec60) (0xc001820dc0) Stream removed, broadcasting: 3 I0122 13:03:09.445610 9 log.go:172] (0xc000237c20) (1) Data frame handling I0122 13:03:09.445632 9 log.go:172] (0xc000237c20) (1) Data frame sent I0122 13:03:09.445643 9 log.go:172] (0xc00124ec60) (0xc000237c20) Stream removed, broadcasting: 1 I0122 13:03:09.445663 9 log.go:172] (0xc00124ec60) Go away received I0122 13:03:09.446364 9 log.go:172] (0xc00124ec60) (0xc000237c20) Stream removed, 
broadcasting: 1 I0122 13:03:09.446404 9 log.go:172] (0xc00124ec60) (0xc001820dc0) Stream removed, broadcasting: 3 I0122 13:03:09.446420 9 log.go:172] (0xc00124ec60) (0xc000a44be0) Stream removed, broadcasting: 5 Jan 22 13:03:09.446: INFO: Exec stderr: "" Jan 22 13:03:09.446: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2026 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 13:03:09.446: INFO: >>> kubeConfig: /root/.kube/config I0122 13:03:09.522743 9 log.go:172] (0xc001a94210) (0xc001844960) Create stream I0122 13:03:09.523015 9 log.go:172] (0xc001a94210) (0xc001844960) Stream added, broadcasting: 1 I0122 13:03:09.528970 9 log.go:172] (0xc001a94210) Reply frame received for 1 I0122 13:03:09.529058 9 log.go:172] (0xc001a94210) (0xc000a44d20) Create stream I0122 13:03:09.529083 9 log.go:172] (0xc001a94210) (0xc000a44d20) Stream added, broadcasting: 3 I0122 13:03:09.530369 9 log.go:172] (0xc001a94210) Reply frame received for 3 I0122 13:03:09.530403 9 log.go:172] (0xc001a94210) (0xc000024c80) Create stream I0122 13:03:09.530418 9 log.go:172] (0xc001a94210) (0xc000024c80) Stream added, broadcasting: 5 I0122 13:03:09.531946 9 log.go:172] (0xc001a94210) Reply frame received for 5 I0122 13:03:09.666143 9 log.go:172] (0xc001a94210) Data frame received for 3 I0122 13:03:09.666211 9 log.go:172] (0xc000a44d20) (3) Data frame handling I0122 13:03:09.666245 9 log.go:172] (0xc000a44d20) (3) Data frame sent I0122 13:03:09.786501 9 log.go:172] (0xc001a94210) Data frame received for 1 I0122 13:03:09.786621 9 log.go:172] (0xc001a94210) (0xc000a44d20) Stream removed, broadcasting: 3 I0122 13:03:09.786682 9 log.go:172] (0xc001844960) (1) Data frame handling I0122 13:03:09.786709 9 log.go:172] (0xc001844960) (1) Data frame sent I0122 13:03:09.786746 9 log.go:172] (0xc001a94210) (0xc000024c80) Stream removed, broadcasting: 5 I0122 13:03:09.786880 9 log.go:172] (0xc001a94210) (0xc001844960) Stream removed, broadcasting: 1 I0122 13:03:09.786908 9 log.go:172] (0xc001a94210) Go away received I0122 13:03:09.787727 9 log.go:172] (0xc001a94210) (0xc001844960) Stream removed, broadcasting: 1 I0122 13:03:09.787892 9 log.go:172] (0xc001a94210) (0xc000a44d20) Stream removed, broadcasting: 3 I0122 13:03:09.787922 9 log.go:172] (0xc001a94210) (0xc000024c80) Stream removed, broadcasting: 5 Jan 22 13:03:09.787: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:03:09.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2026" for this suite. 
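All the stream-frame noise above is eight execs of cat against two pods, checking one rule: the kubelet manages a container's /etc/hosts unless the pod runs with hostNetwork: true or the container mounts something else over /etc/hosts (the busybox-3 case, verified via the /etc/hosts-original copy). A sketch of the opt-out via an explicit mount; image, command, and the exact volume source are assumed:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod                   # hostNetwork defaults to false
spec:
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]     # /etc/hosts here is kubelet-managed
  - name: busybox-3
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: etc-hosts-override
      mountPath: /etc/hosts        # explicit mount, so the kubelet leaves it alone
  volumes:
  - name: etc-hosts-override
    hostPath:
      path: /etc/hosts
      type: File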
Jan 22 13:04:13.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:04:13.955: INFO: namespace e2e-kubelet-etc-hosts-2026 deletion completed in 1m4.154239249s • [SLOW TEST:96.561 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:04:13.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-3733 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3733 to expose endpoints map[] Jan 22 13:04:14.214: INFO: successfully validated that service multi-endpoint-test in namespace services-3733 exposes endpoints map[] (21.224081ms elapsed) STEP: Creating pod pod1 in namespace services-3733 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3733 to expose endpoints map[pod1:[100]] Jan 22 13:04:18.447: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.218747263s elapsed, will retry) Jan 22 13:04:23.614: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.385547113s elapsed, will retry) Jan 22 13:04:24.666: INFO: successfully validated that service multi-endpoint-test in namespace services-3733 exposes endpoints map[pod1:[100]] (10.437422575s elapsed) STEP: Creating pod pod2 in namespace services-3733 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3733 to expose endpoints map[pod1:[100] pod2:[101]] Jan 22 13:04:31.651: INFO: Unexpected endpoints: found map[b73dd18b-80a4-4e20-95f1-aacdda9fef0c:[100]], expected map[pod1:[100] pod2:[101]] (6.947262027s elapsed, will retry) Jan 22 13:04:33.866: INFO: successfully validated that service multi-endpoint-test in namespace services-3733 exposes endpoints map[pod1:[100] pod2:[101]] (9.16227315s elapsed) STEP: Deleting pod pod1 in namespace services-3733 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3733 to expose endpoints map[pod2:[101]] Jan 22 13:04:34.944: INFO: successfully validated that service multi-endpoint-test in namespace services-3733 exposes endpoints map[pod2:[101]] (1.071385967s elapsed) STEP: Deleting pod pod2 in namespace services-3733 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3733 to expose endpoints map[] Jan 22 13:04:35.975: INFO: successfully validated that service multi-endpoint-test in namespace services-3733 exposes endpoints map[] 
(1.023032839s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:04:37.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3733" for this suite. Jan 22 13:04:59.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:04:59.211: INFO: namespace services-3733 deletion completed in 22.116400569s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:45.256 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:04:59.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-27f0371f-3246-4ffa-a9f2-284538b12d92 STEP: Creating a pod to test consume configMaps Jan 22 13:04:59.357: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-596ad08a-1609-4943-8388-38794e037c96" in namespace "projected-7402" to be "success or failure" Jan 22 13:04:59.447: INFO: Pod "pod-projected-configmaps-596ad08a-1609-4943-8388-38794e037c96": Phase="Pending", Reason="", readiness=false. Elapsed: 90.696436ms Jan 22 13:05:01.458: INFO: Pod "pod-projected-configmaps-596ad08a-1609-4943-8388-38794e037c96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101045217s Jan 22 13:05:03.469: INFO: Pod "pod-projected-configmaps-596ad08a-1609-4943-8388-38794e037c96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112411594s Jan 22 13:05:05.476: INFO: Pod "pod-projected-configmaps-596ad08a-1609-4943-8388-38794e037c96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119613469s Jan 22 13:05:07.485: INFO: Pod "pod-projected-configmaps-596ad08a-1609-4943-8388-38794e037c96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128020157s Jan 22 13:05:09.494: INFO: Pod "pod-projected-configmaps-596ad08a-1609-4943-8388-38794e037c96": Phase="Pending", Reason="", readiness=false. Elapsed: 10.137004486s Jan 22 13:05:11.507: INFO: Pod "pod-projected-configmaps-596ad08a-1609-4943-8388-38794e037c96": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.150301678s STEP: Saw pod success Jan 22 13:05:11.507: INFO: Pod "pod-projected-configmaps-596ad08a-1609-4943-8388-38794e037c96" satisfied condition "success or failure" Jan 22 13:05:11.513: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-596ad08a-1609-4943-8388-38794e037c96 container projected-configmap-volume-test: STEP: delete the pod Jan 22 13:05:11.566: INFO: Waiting for pod pod-projected-configmaps-596ad08a-1609-4943-8388-38794e037c96 to disappear Jan 22 13:05:11.573: INFO: Pod pod-projected-configmaps-596ad08a-1609-4943-8388-38794e037c96 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:05:11.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7402" for this suite. Jan 22 13:05:17.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:05:17.722: INFO: namespace projected-7402 deletion completed in 6.141269271s • [SLOW TEST:18.510 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:05:17.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 22 13:05:17.865: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26694c36-edec-42dc-9388-d33bad632902" in namespace "downward-api-9896" to be "success or failure" Jan 22 13:05:17.875: INFO: Pod "downwardapi-volume-26694c36-edec-42dc-9388-d33bad632902": Phase="Pending", Reason="", readiness=false. Elapsed: 9.608723ms Jan 22 13:05:19.900: INFO: Pod "downwardapi-volume-26694c36-edec-42dc-9388-d33bad632902": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034644997s Jan 22 13:05:21.980: INFO: Pod "downwardapi-volume-26694c36-edec-42dc-9388-d33bad632902": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114972457s Jan 22 13:05:23.986: INFO: Pod "downwardapi-volume-26694c36-edec-42dc-9388-d33bad632902": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120870359s Jan 22 13:05:25.999: INFO: Pod "downwardapi-volume-26694c36-edec-42dc-9388-d33bad632902": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.134226079s Jan 22 13:05:28.006: INFO: Pod "downwardapi-volume-26694c36-edec-42dc-9388-d33bad632902": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.14131294s STEP: Saw pod success Jan 22 13:05:28.007: INFO: Pod "downwardapi-volume-26694c36-edec-42dc-9388-d33bad632902" satisfied condition "success or failure" Jan 22 13:05:28.010: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-26694c36-edec-42dc-9388-d33bad632902 container client-container: STEP: delete the pod Jan 22 13:05:28.348: INFO: Waiting for pod downwardapi-volume-26694c36-edec-42dc-9388-d33bad632902 to disappear Jan 22 13:05:28.360: INFO: Pod downwardapi-volume-26694c36-edec-42dc-9388-d33bad632902 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:05:28.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9896" for this suite. Jan 22 13:05:34.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:05:34.580: INFO: namespace downward-api-9896 deletion completed in 6.193369247s • [SLOW TEST:16.857 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:05:34.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 22 13:05:34.758: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:05:43.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3199" for this suite. 
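The 'should provide podname only' downward-API volume spec earlier in this block is the volume-file twin of the env-var tests: it writes metadata.name into a file and asserts on the file contents. Roughly, with illustrative name and image:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-podname   # illustrative; the run used a generated name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the file content must equal the pod's own name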
Jan 22 13:06:29.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:06:29.528: INFO: namespace pods-3199 deletion completed in 46.242901496s • [SLOW TEST:54.947 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:06:29.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 22 13:06:38.770: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:06:39.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7766" for this suite. 
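The ReplicaSet spec above works in two moves: a bare pod labeled name=pod-adoption-release is created first, then a ReplicaSet with a matching selector, so the controller adopts the orphan instead of creating a replacement; changing the pod's label afterwards makes the controller release it again. A sketch of such a ReplicaSet, with the image assumed:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release   # matches the pre-existing orphan pod's label
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: nginx               # assumed; the suite uses its own test image

Adoption and release are driven entirely by the selector/label match plus the ownerReference the controller sets or clears on the pod.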
Jan 22 13:07:03.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:07:04.007: INFO: namespace replicaset-7766 deletion completed in 24.169396363s • [SLOW TEST:34.479 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:07:04.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Jan 22 13:07:04.226: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3493" to be "success or failure" Jan 22 13:07:04.241: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.170522ms Jan 22 13:07:06.250: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02369735s Jan 22 13:07:08.258: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031632044s Jan 22 13:07:10.269: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042198827s Jan 22 13:07:12.292: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065210297s Jan 22 13:07:14.319: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.092043205s Jan 22 13:07:16.352: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.125894823s Jan 22 13:07:18.395: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.168412222s STEP: Saw pod success Jan 22 13:07:18.395: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 22 13:07:18.399: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 22 13:07:18.449: INFO: Waiting for pod pod-host-path-test to disappear Jan 22 13:07:18.465: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:07:18.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-3493" for this suite. 
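The HostPath spec above mounts a hostPath volume into pod-host-path-test and has the containers print the mount point's file mode, which the framework then reads back from the logs (test-container-1 in this run). A hedged sketch; path, image, and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]   # prints the directory mode for the assertion
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/host-path-test    # assumed path
      type: DirectoryOrCreate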
Jan 22 13:07:24.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:07:24.635: INFO: namespace hostpath-3493 deletion completed in 6.16365987s • [SLOW TEST:20.627 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:07:24.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 22 13:07:24.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-101' Jan 22 13:07:27.122: INFO: stderr: "" Jan 22 13:07:27.123: INFO: stdout: "replicationcontroller/redis-master created\n" Jan 22 13:07:27.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-101' Jan 22 13:07:27.662: INFO: stderr: "" Jan 22 13:07:27.662: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jan 22 13:07:28.672: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:07:28.672: INFO: Found 0 / 1 Jan 22 13:07:29.677: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:07:29.677: INFO: Found 0 / 1 Jan 22 13:07:30.670: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:07:30.670: INFO: Found 0 / 1 Jan 22 13:07:31.677: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:07:31.677: INFO: Found 0 / 1 Jan 22 13:07:32.678: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:07:32.678: INFO: Found 0 / 1 Jan 22 13:07:33.677: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:07:33.677: INFO: Found 0 / 1 Jan 22 13:07:34.675: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:07:34.675: INFO: Found 0 / 1 Jan 22 13:07:35.677: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:07:35.677: INFO: Found 1 / 1 Jan 22 13:07:35.677: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 22 13:07:35.681: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:07:35.681: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
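The manifests piped into the two 'create -f -' calls above are not echoed in the log, but the describe output below fixes their essentials: labels app=redis,role=master, image gcr.io/kubernetes-e2e-test-images/redis:1.0, a container port named redis-server on 6379. A reconstruction of the ReplicationController consistent with that output; anything not visible in the describe text is assumed:

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - name: redis-server       # the Service's TargetPort resolves to this name
          containerPort: 6379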
Jan 22 13:07:35.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-q9q9s --namespace=kubectl-101' Jan 22 13:07:35.875: INFO: stderr: "" Jan 22 13:07:35.875: INFO: stdout: "Name: redis-master-q9q9s\nNamespace: kubectl-101\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Wed, 22 Jan 2020 13:07:27 +0000\nLabels: app=redis\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://de97999b2ebea7c9bfb4e7db9dafbe2a2bc575c08bd69834a814d3b14cfa29f6\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 22 Jan 2020 13:07:34 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-mpx7p (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-mpx7p:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-mpx7p\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 8s default-scheduler Successfully assigned kubectl-101/redis-master-q9q9s to iruya-node\n Normal Pulled 5s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-node Created container redis-master\n Normal Started 1s kubelet, iruya-node Started container redis-master\n" Jan 22 13:07:35.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-101' Jan 22 13:07:35.990: INFO: stderr: "" Jan 22 13:07:35.990: INFO: stdout: "Name: redis-master\nNamespace: kubectl-101\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: redis-master-q9q9s\n" Jan 22 13:07:35.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-101' Jan 22 13:07:36.121: INFO: stderr: "" Jan 22 13:07:36.121: INFO: stdout: "Name: redis-master\nNamespace: kubectl-101\nLabels: app=redis\n role=master\nAnnotations: <none>\nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.108.201.73\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: <none>\n" Jan 22 13:07:36.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node' Jan 22 13:07:36.245: INFO: stderr: "" Jan 22 13:07:36.245: INFO: stdout: "Name: iruya-node\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: <none>\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Wed, 22 Jan 2020 13:07:24 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 22 Jan 2020 13:07:24 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 22 Jan 2020 13:07:24 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 22 Jan 2020 13:07:24 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 171d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 102d\n kubectl-101 redis-master-q9q9s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Jan 22 13:07:36.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-101' Jan 22 13:07:36.371: INFO: stderr: "" Jan 22 13:07:36.371: INFO: stdout: "Name: kubectl-101\nLabels: e2e-framework=kubectl\n e2e-run=6dee1230-77c7-42dc-9023-44bbcaf16993\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:07:36.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-101" for this suite.
Jan 22 13:07:58.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:07:58.527: INFO: namespace kubectl-101 deletion completed in 22.151387291s • [SLOW TEST:33.892 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:07:58.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8092 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-8092 STEP: Creating statefulset with conflicting port in namespace statefulset-8092 STEP: Waiting until pod test-pod starts running in namespace statefulset-8092 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-8092 Jan 22 13:08:10.728: INFO: Observed stateful pod in namespace: statefulset-8092, name: ss-0, uid: d2cea6d7-6982-409c-a813-0b2d7032ee29, status phase: Pending. Waiting for statefulset controller to delete. Jan 22 13:08:16.503: INFO: Observed stateful pod in namespace: statefulset-8092, name: ss-0, uid: d2cea6d7-6982-409c-a813-0b2d7032ee29, status phase: Failed. Waiting for statefulset controller to delete. Jan 22 13:08:16.549: INFO: Observed stateful pod in namespace: statefulset-8092, name: ss-0, uid: d2cea6d7-6982-409c-a813-0b2d7032ee29, status phase: Failed. Waiting for statefulset controller to delete.
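For context on the eviction mechanics being exercised here: the test pins a bare pod and the StatefulSet's pod template to the same hostPort on the same node, so kubelet rejects ss-0 (hence the Failed phases above) and the StatefulSet controller must repeatedly delete and recreate it. A hedged sketch of the conflicting bare pod follows; the port and image are illustrative, since neither appears in this log, and only the pod name and node name are taken from the run:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod               # name matches the STEP messages above
spec:
  nodeName: iruya-node         # pin to the same node the stateful pod targets
  containers:
  - name: webserver
    image: docker.io/library/nginx:1.14-alpine   # illustrative image
    ports:
    - containerPort: 80
      hostPort: 21017          # illustrative port; the StatefulSet template claims the same hostPort

Once this pod is removed, the port frees up and the recreated ss-0 can finally start, which is what the following log records show.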
Jan 22 13:08:16.560: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8092 STEP: Removing pod with conflicting port in namespace statefulset-8092 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-8092 and enters the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 22 13:08:28.808: INFO: Deleting all statefulsets in ns statefulset-8092 Jan 22 13:08:28.814: INFO: Scaling statefulset ss to 0 Jan 22 13:08:48.846: INFO: Waiting for statefulset status.replicas updated to 0 Jan 22 13:08:48.856: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:08:48.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8092" for this suite. Jan 22 13:08:54.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:08:55.051: INFO: namespace statefulset-8092 deletion completed in 6.124411262s • [SLOW TEST:56.523 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:08:55.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-76ef0dc4-2046-4dd4-8a2a-8440b4cbd43a [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:08:55.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3233" for this suite.
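The empty-secret-key test above passes by failing: apiserver validation requires every key in a Secret's data map to be a non-empty, valid name, so the create call is rejected outright. A sketch of the kind of object the test submits (the name and value are illustrative; only the empty key matters):

apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test
data:
  "": dmFsdWUtMQ==             # base64 "value-1" stored under an empty key; rejected at validation time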
Jan 22 13:09:01.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:09:01.347: INFO: namespace secrets-3233 deletion completed in 6.149936708s • [SLOW TEST:6.296 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:09:01.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 22 13:09:01.571: INFO: Waiting up to 5m0s for pod "downward-api-d8366289-2e8a-4430-9441-94adb1dd17ef" in namespace "downward-api-6300" to be "success or failure" Jan 22 13:09:01.599: INFO: Pod "downward-api-d8366289-2e8a-4430-9441-94adb1dd17ef": Phase="Pending", Reason="", readiness=false. Elapsed: 27.549951ms Jan 22 13:09:03.609: INFO: Pod "downward-api-d8366289-2e8a-4430-9441-94adb1dd17ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037790443s Jan 22 13:09:05.837: INFO: Pod "downward-api-d8366289-2e8a-4430-9441-94adb1dd17ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265377677s Jan 22 13:09:07.844: INFO: Pod "downward-api-d8366289-2e8a-4430-9441-94adb1dd17ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.27276324s Jan 22 13:09:09.861: INFO: Pod "downward-api-d8366289-2e8a-4430-9441-94adb1dd17ef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.289938865s Jan 22 13:09:11.882: INFO: Pod "downward-api-d8366289-2e8a-4430-9441-94adb1dd17ef": Phase="Pending", Reason="", readiness=false. Elapsed: 10.310752585s Jan 22 13:09:13.895: INFO: Pod "downward-api-d8366289-2e8a-4430-9441-94adb1dd17ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.323568076s STEP: Saw pod success Jan 22 13:09:13.895: INFO: Pod "downward-api-d8366289-2e8a-4430-9441-94adb1dd17ef" satisfied condition "success or failure" Jan 22 13:09:13.916: INFO: Trying to get logs from node iruya-node pod downward-api-d8366289-2e8a-4430-9441-94adb1dd17ef container dapi-container: STEP: delete the pod Jan 22 13:09:14.080: INFO: Waiting for pod downward-api-d8366289-2e8a-4430-9441-94adb1dd17ef to disappear Jan 22 13:09:14.083: INFO: Pod downward-api-d8366289-2e8a-4430-9441-94adb1dd17ef no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:09:14.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6300" for this suite. 
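The downward-api test that just completed injects the pod's own metadata into its container environment through the downward API. A minimal sketch of such a pod, assuming an illustrative image and variable name (only the container name dapi-container is taken from the log):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-pod
spec:
  restartPolicy: Never         # run once; the test waits for "success or failure"
  containers:
  - name: dapi-container
    image: busybox             # illustrative image
    command: ["sh", "-c", "env"]   # print the environment so it can be asserted on
    env:
    - name: POD_UID            # illustrative variable name
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid  # downward API reference to the pod's own UID

Other fieldPaths such as status.hostIP and metadata.name work the same way.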
Jan 22 13:09:20.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:09:20.243: INFO: namespace downward-api-6300 deletion completed in 6.156748434s • [SLOW TEST:18.895 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:09:20.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:09:20.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1415" for this suite. Jan 22 13:09:26.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:09:26.553: INFO: namespace services-1415 deletion completed in 6.178878854s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.309 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:09:26.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-89c943a4-d1a6-4b42-b76b-57c6af1af05d STEP: Creating secret with name s-test-opt-upd-861c7f72-ef87-4a83-b5ae-7a56c52aba35 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-89c943a4-d1a6-4b42-b76b-57c6af1af05d STEP: Updating secret s-test-opt-upd-861c7f72-ef87-4a83-b5ae-7a56c52aba35 STEP: Creating secret with name 
s-test-opt-create-9898557c-73ee-47b4-b689-5e88c73a6825 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:09:43.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2335" for this suite. Jan 22 13:10:05.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:10:05.310: INFO: namespace secrets-2335 deletion completed in 22.161883261s • [SLOW TEST:38.755 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:10:05.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 22 13:10:05.447: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1beed879-1356-499c-ae3e-27210f68c634" in namespace "projected-3020" to be "success or failure" Jan 22 13:10:05.458: INFO: Pod "downwardapi-volume-1beed879-1356-499c-ae3e-27210f68c634": Phase="Pending", Reason="", readiness=false. Elapsed: 11.430679ms Jan 22 13:10:07.491: INFO: Pod "downwardapi-volume-1beed879-1356-499c-ae3e-27210f68c634": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044417353s Jan 22 13:10:09.499: INFO: Pod "downwardapi-volume-1beed879-1356-499c-ae3e-27210f68c634": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051973668s Jan 22 13:10:11.509: INFO: Pod "downwardapi-volume-1beed879-1356-499c-ae3e-27210f68c634": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062273161s Jan 22 13:10:13.636: INFO: Pod "downwardapi-volume-1beed879-1356-499c-ae3e-27210f68c634": Phase="Pending", Reason="", readiness=false. Elapsed: 8.189601371s Jan 22 13:10:15.647: INFO: Pod "downwardapi-volume-1beed879-1356-499c-ae3e-27210f68c634": Phase="Pending", Reason="", readiness=false. Elapsed: 10.200530474s Jan 22 13:10:17.656: INFO: Pod "downwardapi-volume-1beed879-1356-499c-ae3e-27210f68c634": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.209447085s STEP: Saw pod success Jan 22 13:10:17.656: INFO: Pod "downwardapi-volume-1beed879-1356-499c-ae3e-27210f68c634" satisfied condition "success or failure" Jan 22 13:10:17.659: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1beed879-1356-499c-ae3e-27210f68c634 container client-container: STEP: delete the pod Jan 22 13:10:17.725: INFO: Waiting for pod downwardapi-volume-1beed879-1356-499c-ae3e-27210f68c634 to disappear Jan 22 13:10:17.733: INFO: Pod downwardapi-volume-1beed879-1356-499c-ae3e-27210f68c634 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:10:17.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3020" for this suite. Jan 22 13:10:23.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:10:24.005: INFO: namespace projected-3020 deletion completed in 6.18272395s • [SLOW TEST:18.696 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:10:24.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-2ac3fd0c-b3cd-4b45-9d2b-e9797b1650af STEP: Creating a pod to test consume configMaps Jan 22 13:10:24.117: INFO: Waiting up to 5m0s for pod "pod-configmaps-90b75ee8-bd2d-4f08-a807-b68d3d3cd1a3" in namespace "configmap-8483" to be "success or failure" Jan 22 13:10:24.149: INFO: Pod "pod-configmaps-90b75ee8-bd2d-4f08-a807-b68d3d3cd1a3": Phase="Pending", Reason="", readiness=false. Elapsed: 32.09737ms Jan 22 13:10:26.171: INFO: Pod "pod-configmaps-90b75ee8-bd2d-4f08-a807-b68d3d3cd1a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053499742s Jan 22 13:10:28.183: INFO: Pod "pod-configmaps-90b75ee8-bd2d-4f08-a807-b68d3d3cd1a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066118736s Jan 22 13:10:30.191: INFO: Pod "pod-configmaps-90b75ee8-bd2d-4f08-a807-b68d3d3cd1a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074336641s Jan 22 13:10:32.205: INFO: Pod "pod-configmaps-90b75ee8-bd2d-4f08-a807-b68d3d3cd1a3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087918577s Jan 22 13:10:34.211: INFO: Pod "pod-configmaps-90b75ee8-bd2d-4f08-a807-b68d3d3cd1a3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.094377032s STEP: Saw pod success Jan 22 13:10:34.212: INFO: Pod "pod-configmaps-90b75ee8-bd2d-4f08-a807-b68d3d3cd1a3" satisfied condition "success or failure" Jan 22 13:10:34.215: INFO: Trying to get logs from node iruya-node pod pod-configmaps-90b75ee8-bd2d-4f08-a807-b68d3d3cd1a3 container configmap-volume-test: STEP: delete the pod Jan 22 13:10:34.278: INFO: Waiting for pod pod-configmaps-90b75ee8-bd2d-4f08-a807-b68d3d3cd1a3 to disappear Jan 22 13:10:34.304: INFO: Pod pod-configmaps-90b75ee8-bd2d-4f08-a807-b68d3d3cd1a3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:10:34.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8483" for this suite. Jan 22 13:10:40.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:10:40.615: INFO: namespace configmap-8483 deletion completed in 6.298474805s • [SLOW TEST:16.609 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:10:40.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 22 13:10:40.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2593' Jan 22 13:10:40.830: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 22 13:10:40.830: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Jan 22 13:10:40.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-2593' Jan 22 13:10:40.985: INFO: stderr: "" Jan 22 13:10:40.985: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:10:40.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2593" for this suite. Jan 22 13:11:03.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:11:03.166: INFO: namespace kubectl-2593 deletion completed in 22.174539356s • [SLOW TEST:22.549 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:11:03.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 22 13:11:11.459: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:11:11.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1736" for this suite. 
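The termination-message test above turns on two container-spec knobs: a non-default terminationMessagePath and a non-root securityContext. On exit, kubelet reads the file at that path into the container's status, which is what the Expected: &{DONE} assertion checks. A hedged sketch follows; the image, UID, and path are assumptions rather than values shown in this log:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-nonroot
spec:
  restartPolicy: Never
  containers:
  - name: term-msg
    image: busybox             # illustrative image
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path, per the test name
    securityContext:
      runAsUser: 1000          # non-root user, per the test name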
Jan 22 13:11:17.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:11:17.703: INFO: namespace container-runtime-1736 deletion completed in 6.198463691s • [SLOW TEST:14.537 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:11:17.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2600 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-2600 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2600 Jan 22 13:11:17.884: INFO: Found 0 stateful pods, waiting for 1 Jan 22 13:11:27.903: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 22 13:11:27.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2600 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 22 13:11:28.629: INFO: stderr: "I0122 13:11:28.120508 529 log.go:172] (0xc0008840b0) (0xc0007f0640) Create stream\nI0122 13:11:28.120751 529 log.go:172] (0xc0008840b0) (0xc0007f0640) Stream added, broadcasting: 1\nI0122 13:11:28.130853 529 log.go:172] (0xc0008840b0) Reply frame received for 1\nI0122 13:11:28.130932 529 log.go:172] (0xc0008840b0) (0xc0007f06e0) Create stream\nI0122 13:11:28.130940 529 log.go:172] (0xc0008840b0) (0xc0007f06e0) Stream added, broadcasting: 3\nI0122 13:11:28.139113 529 log.go:172] (0xc0008840b0) Reply frame received for 3\nI0122 13:11:28.139189 529 log.go:172] (0xc0008840b0) (0xc000648320) Create stream\nI0122 13:11:28.139202 529 log.go:172] (0xc0008840b0) (0xc000648320) Stream added, broadcasting: 5\nI0122 13:11:28.144594 
529 log.go:172] (0xc0008840b0) Reply frame received for 5\nI0122 13:11:28.427064 529 log.go:172] (0xc0008840b0) Data frame received for 5\nI0122 13:11:28.427190 529 log.go:172] (0xc000648320) (5) Data frame handling\nI0122 13:11:28.427209 529 log.go:172] (0xc000648320) (5) Data frame sent\nI0122 13:11:28.427218 529 log.go:172] (0xc0008840b0) Data frame received for 5\nI0122 13:11:28.427226 529 log.go:172] (0xc000648320) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0122 13:11:28.427249 529 log.go:172] (0xc000648320) (5) Data frame sent\nI0122 13:11:28.489850 529 log.go:172] (0xc0008840b0) Data frame received for 3\nI0122 13:11:28.489924 529 log.go:172] (0xc0007f06e0) (3) Data frame handling\nI0122 13:11:28.489935 529 log.go:172] (0xc0007f06e0) (3) Data frame sent\nI0122 13:11:28.626048 529 log.go:172] (0xc0008840b0) (0xc0007f06e0) Stream removed, broadcasting: 3\nI0122 13:11:28.626226 529 log.go:172] (0xc0008840b0) Data frame received for 1\nI0122 13:11:28.626237 529 log.go:172] (0xc0007f0640) (1) Data frame handling\nI0122 13:11:28.626246 529 log.go:172] (0xc0007f0640) (1) Data frame sent\nI0122 13:11:28.626274 529 log.go:172] (0xc0008840b0) (0xc0007f0640) Stream removed, broadcasting: 1\nI0122 13:11:28.626299 529 log.go:172] (0xc0008840b0) (0xc000648320) Stream removed, broadcasting: 5\nI0122 13:11:28.626373 529 log.go:172] (0xc0008840b0) Go away received\nI0122 13:11:28.626513 529 log.go:172] (0xc0008840b0) (0xc0007f0640) Stream removed, broadcasting: 1\nI0122 13:11:28.626531 529 log.go:172] (0xc0008840b0) (0xc0007f06e0) Stream removed, broadcasting: 3\nI0122 13:11:28.626573 529 log.go:172] (0xc0008840b0) (0xc000648320) Stream removed, broadcasting: 5\n" Jan 22 13:11:28.630: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 22 13:11:28.630: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 22 13:11:28.636: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 22 13:11:38.656: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 22 13:11:38.656: INFO: Waiting for statefulset status.replicas updated to 0 Jan 22 13:11:38.690: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998823s Jan 22 13:11:39.703: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992076645s Jan 22 13:11:40.710: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.978814689s Jan 22 13:11:41.718: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.971421435s Jan 22 13:11:42.730: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.963498848s Jan 22 13:11:43.739: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.951788573s Jan 22 13:11:44.749: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.942449908s Jan 22 13:11:45.757: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.932441621s Jan 22 13:11:46.766: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.924609384s Jan 22 13:11:47.774: INFO: Verifying statefulset ss doesn't scale past 1 for another 915.620499ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-2600 Jan 22 13:11:48.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2600 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html
/usr/share/nginx/html/ || true' Jan 22 13:11:49.312: INFO: stderr: "I0122 13:11:49.010423 545 log.go:172] (0xc0007fa580) (0xc0007994a0) Create stream\nI0122 13:11:49.010513 545 log.go:172] (0xc0007fa580) (0xc0007994a0) Stream added, broadcasting: 1\nI0122 13:11:49.019278 545 log.go:172] (0xc0007fa580) Reply frame received for 1\nI0122 13:11:49.019324 545 log.go:172] (0xc0007fa580) (0xc0006a1c20) Create stream\nI0122 13:11:49.019346 545 log.go:172] (0xc0007fa580) (0xc0006a1c20) Stream added, broadcasting: 3\nI0122 13:11:49.021475 545 log.go:172] (0xc0007fa580) Reply frame received for 3\nI0122 13:11:49.021532 545 log.go:172] (0xc0007fa580) (0xc00085a000) Create stream\nI0122 13:11:49.021548 545 log.go:172] (0xc0007fa580) (0xc00085a000) Stream added, broadcasting: 5\nI0122 13:11:49.023458 545 log.go:172] (0xc0007fa580) Reply frame received for 5\nI0122 13:11:49.155146 545 log.go:172] (0xc0007fa580) Data frame received for 5\nI0122 13:11:49.155414 545 log.go:172] (0xc00085a000) (5) Data frame handling\nI0122 13:11:49.155428 545 log.go:172] (0xc00085a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0122 13:11:49.155446 545 log.go:172] (0xc0007fa580) Data frame received for 3\nI0122 13:11:49.155454 545 log.go:172] (0xc0006a1c20) (3) Data frame handling\nI0122 13:11:49.155467 545 log.go:172] (0xc0006a1c20) (3) Data frame sent\nI0122 13:11:49.305464 545 log.go:172] (0xc0007fa580) (0xc0006a1c20) Stream removed, broadcasting: 3\nI0122 13:11:49.305567 545 log.go:172] (0xc0007fa580) Data frame received for 1\nI0122 13:11:49.305574 545 log.go:172] (0xc0007994a0) (1) Data frame handling\nI0122 13:11:49.305581 545 log.go:172] (0xc0007994a0) (1) Data frame sent\nI0122 13:11:49.305588 545 log.go:172] (0xc0007fa580) (0xc0007994a0) Stream removed, broadcasting: 1\nI0122 13:11:49.305628 545 log.go:172] (0xc0007fa580) (0xc00085a000) Stream removed, broadcasting: 5\nI0122 13:11:49.305670 545 log.go:172] (0xc0007fa580) Go away received\nI0122 13:11:49.305898 545 log.go:172] (0xc0007fa580) (0xc0007994a0) Stream removed, broadcasting: 1\nI0122 13:11:49.305916 545 log.go:172] (0xc0007fa580) (0xc0006a1c20) Stream removed, broadcasting: 3\nI0122 13:11:49.305925 545 log.go:172] (0xc0007fa580) (0xc00085a000) Stream removed, broadcasting: 5\n" Jan 22 13:11:49.313: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 22 13:11:49.313: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 22 13:11:49.321: INFO: Found 1 stateful pods, waiting for 3 Jan 22 13:11:59.349: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 22 13:11:59.349: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 22 13:11:59.349: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 22 13:12:09.333: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 22 13:12:09.333: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 22 13:12:09.333: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 22 13:12:09.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2600 ss-0 -- /bin/sh -x -c mv -v 
/usr/share/nginx/html/index.html /tmp/ || true' Jan 22 13:12:09.889: INFO: stderr: "I0122 13:12:09.522117 561 log.go:172] (0xc00064cb00) (0xc0005508c0) Create stream\nI0122 13:12:09.522396 561 log.go:172] (0xc00064cb00) (0xc0005508c0) Stream added, broadcasting: 1\nI0122 13:12:09.529732 561 log.go:172] (0xc00064cb00) Reply frame received for 1\nI0122 13:12:09.529780 561 log.go:172] (0xc00064cb00) (0xc0007b0000) Create stream\nI0122 13:12:09.529788 561 log.go:172] (0xc00064cb00) (0xc0007b0000) Stream added, broadcasting: 3\nI0122 13:12:09.531696 561 log.go:172] (0xc00064cb00) Reply frame received for 3\nI0122 13:12:09.531717 561 log.go:172] (0xc00064cb00) (0xc00082c000) Create stream\nI0122 13:12:09.531725 561 log.go:172] (0xc00064cb00) (0xc00082c000) Stream added, broadcasting: 5\nI0122 13:12:09.533387 561 log.go:172] (0xc00064cb00) Reply frame received for 5\nI0122 13:12:09.640209 561 log.go:172] (0xc00064cb00) Data frame received for 3\nI0122 13:12:09.640258 561 log.go:172] (0xc0007b0000) (3) Data frame handling\nI0122 13:12:09.640273 561 log.go:172] (0xc0007b0000) (3) Data frame sent\nI0122 13:12:09.640293 561 log.go:172] (0xc00064cb00) Data frame received for 5\nI0122 13:12:09.640300 561 log.go:172] (0xc00082c000) (5) Data frame handling\nI0122 13:12:09.640306 561 log.go:172] (0xc00082c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0122 13:12:09.882728 561 log.go:172] (0xc00064cb00) (0xc0007b0000) Stream removed, broadcasting: 3\nI0122 13:12:09.882834 561 log.go:172] (0xc00064cb00) Data frame received for 1\nI0122 13:12:09.882851 561 log.go:172] (0xc00064cb00) (0xc00082c000) Stream removed, broadcasting: 5\nI0122 13:12:09.882872 561 log.go:172] (0xc0005508c0) (1) Data frame handling\nI0122 13:12:09.882880 561 log.go:172] (0xc0005508c0) (1) Data frame sent\nI0122 13:12:09.882895 561 log.go:172] (0xc00064cb00) (0xc0005508c0) Stream removed, broadcasting: 1\nI0122 13:12:09.882927 561 log.go:172] (0xc00064cb00) Go away received\nI0122 13:12:09.883678 561 log.go:172] (0xc00064cb00) (0xc0005508c0) Stream removed, broadcasting: 1\nI0122 13:12:09.883687 561 log.go:172] (0xc00064cb00) (0xc0007b0000) Stream removed, broadcasting: 3\nI0122 13:12:09.883691 561 log.go:172] (0xc00064cb00) (0xc00082c000) Stream removed, broadcasting: 5\n" Jan 22 13:12:09.889: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 22 13:12:09.889: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 22 13:12:09.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2600 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 22 13:12:10.325: INFO: stderr: "I0122 13:12:10.041897 581 log.go:172] (0xc0007ea2c0) (0xc0000f88c0) Create stream\nI0122 13:12:10.042081 581 log.go:172] (0xc0007ea2c0) (0xc0000f88c0) Stream added, broadcasting: 1\nI0122 13:12:10.047624 581 log.go:172] (0xc0007ea2c0) Reply frame received for 1\nI0122 13:12:10.047661 581 log.go:172] (0xc0007ea2c0) (0xc0000f8960) Create stream\nI0122 13:12:10.047671 581 log.go:172] (0xc0007ea2c0) (0xc0000f8960) Stream added, broadcasting: 3\nI0122 13:12:10.048871 581 log.go:172] (0xc0007ea2c0) Reply frame received for 3\nI0122 13:12:10.048919 581 log.go:172] (0xc0007ea2c0) (0xc0006e2000) Create stream\nI0122 13:12:10.048933 581 log.go:172] (0xc0007ea2c0) (0xc0006e2000) Stream added, broadcasting: 5\nI0122 13:12:10.050808 581 log.go:172] (0xc0007ea2c0) Reply 
frame received for 5\nI0122 13:12:10.167879 581 log.go:172] (0xc0007ea2c0) Data frame received for 5\nI0122 13:12:10.167961 581 log.go:172] (0xc0006e2000) (5) Data frame handling\nI0122 13:12:10.167975 581 log.go:172] (0xc0006e2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0122 13:12:10.202511 581 log.go:172] (0xc0007ea2c0) Data frame received for 3\nI0122 13:12:10.202538 581 log.go:172] (0xc0000f8960) (3) Data frame handling\nI0122 13:12:10.202571 581 log.go:172] (0xc0000f8960) (3) Data frame sent\nI0122 13:12:10.314522 581 log.go:172] (0xc0007ea2c0) Data frame received for 1\nI0122 13:12:10.314852 581 log.go:172] (0xc0000f88c0) (1) Data frame handling\nI0122 13:12:10.314873 581 log.go:172] (0xc0000f88c0) (1) Data frame sent\nI0122 13:12:10.315612 581 log.go:172] (0xc0007ea2c0) (0xc0006e2000) Stream removed, broadcasting: 5\nI0122 13:12:10.315849 581 log.go:172] (0xc0007ea2c0) (0xc0000f88c0) Stream removed, broadcasting: 1\nI0122 13:12:10.315987 581 log.go:172] (0xc0007ea2c0) (0xc0000f8960) Stream removed, broadcasting: 3\nI0122 13:12:10.316038 581 log.go:172] (0xc0007ea2c0) Go away received\nI0122 13:12:10.316791 581 log.go:172] (0xc0007ea2c0) (0xc0000f88c0) Stream removed, broadcasting: 1\nI0122 13:12:10.316829 581 log.go:172] (0xc0007ea2c0) (0xc0000f8960) Stream removed, broadcasting: 3\nI0122 13:12:10.316838 581 log.go:172] (0xc0007ea2c0) (0xc0006e2000) Stream removed, broadcasting: 5\n" Jan 22 13:12:10.326: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 22 13:12:10.326: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 22 13:12:10.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2600 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 22 13:12:11.114: INFO: stderr: "I0122 13:12:10.651623 600 log.go:172] (0xc00092e630) (0xc0008fcc80) Create stream\nI0122 13:12:10.652146 600 log.go:172] (0xc00092e630) (0xc0008fcc80) Stream added, broadcasting: 1\nI0122 13:12:10.666822 600 log.go:172] (0xc00092e630) Reply frame received for 1\nI0122 13:12:10.667045 600 log.go:172] (0xc00092e630) (0xc0008fc000) Create stream\nI0122 13:12:10.667084 600 log.go:172] (0xc00092e630) (0xc0008fc000) Stream added, broadcasting: 3\nI0122 13:12:10.673351 600 log.go:172] (0xc00092e630) Reply frame received for 3\nI0122 13:12:10.673465 600 log.go:172] (0xc00092e630) (0xc0005d21e0) Create stream\nI0122 13:12:10.673479 600 log.go:172] (0xc00092e630) (0xc0005d21e0) Stream added, broadcasting: 5\nI0122 13:12:10.675389 600 log.go:172] (0xc00092e630) Reply frame received for 5\nI0122 13:12:10.838599 600 log.go:172] (0xc00092e630) Data frame received for 5\nI0122 13:12:10.838694 600 log.go:172] (0xc0005d21e0) (5) Data frame handling\nI0122 13:12:10.838709 600 log.go:172] (0xc0005d21e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0122 13:12:10.908583 600 log.go:172] (0xc00092e630) Data frame received for 3\nI0122 13:12:10.908646 600 log.go:172] (0xc0008fc000) (3) Data frame handling\nI0122 13:12:10.908661 600 log.go:172] (0xc0008fc000) (3) Data frame sent\nI0122 13:12:11.105992 600 log.go:172] (0xc00092e630) Data frame received for 1\nI0122 13:12:11.106113 600 log.go:172] (0xc00092e630) (0xc0008fc000) Stream removed, broadcasting: 3\nI0122 13:12:11.106199 600 log.go:172] (0xc0008fcc80) (1) Data frame handling\nI0122 13:12:11.106218 600 log.go:172] (0xc0008fcc80) (1) Data 
frame sent\nI0122 13:12:11.106233 600 log.go:172] (0xc00092e630) (0xc0008fcc80) Stream removed, broadcasting: 1\nI0122 13:12:11.106691 600 log.go:172] (0xc00092e630) (0xc0005d21e0) Stream removed, broadcasting: 5\nI0122 13:12:11.106716 600 log.go:172] (0xc00092e630) (0xc0008fcc80) Stream removed, broadcasting: 1\nI0122 13:12:11.106721 600 log.go:172] (0xc00092e630) (0xc0008fc000) Stream removed, broadcasting: 3\nI0122 13:12:11.106725 600 log.go:172] (0xc00092e630) (0xc0005d21e0) Stream removed, broadcasting: 5\n" Jan 22 13:12:11.114: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 22 13:12:11.114: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 22 13:12:11.114: INFO: Waiting for statefulset status.replicas updated to 0 Jan 22 13:12:11.128: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 22 13:12:21.160: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 22 13:12:21.160: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 22 13:12:21.160: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 22 13:12:21.184: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999531s Jan 22 13:12:22.195: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993768735s Jan 22 13:12:23.203: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.983022319s Jan 22 13:12:24.210: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.975343293s Jan 22 13:12:25.223: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.967945241s Jan 22 13:12:26.235: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.95459307s Jan 22 13:12:27.256: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.942402701s Jan 22 13:12:28.334: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.921690974s Jan 22 13:12:29.351: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.843600967s Jan 22 13:12:30.382: INFO: Verifying statefulset ss doesn't scale past 3 for another 827.032247ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-2600 Jan 22 13:12:31.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2600 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 13:12:31.953: INFO: stderr: "I0122 13:12:31.601262 616 log.go:172] (0xc00074c0b0) (0xc00082e640) Create stream\nI0122 13:12:31.601345 616 log.go:172] (0xc00074c0b0) (0xc00082e640) Stream added, broadcasting: 1\nI0122 13:12:31.615027 616 log.go:172] (0xc00074c0b0) Reply frame received for 1\nI0122 13:12:31.615093 616 log.go:172] (0xc00074c0b0) (0xc000626320) Create stream\nI0122 13:12:31.615108 616 log.go:172] (0xc00074c0b0) (0xc000626320) Stream added, broadcasting: 3\nI0122 13:12:31.619621 616 log.go:172] (0xc00074c0b0) Reply frame received for 3\nI0122 13:12:31.619649 616 log.go:172] (0xc00074c0b0) (0xc00082e6e0) Create stream\nI0122 13:12:31.619662 616 log.go:172] (0xc00074c0b0) (0xc00082e6e0) Stream added, broadcasting: 5\nI0122 13:12:31.621594 616 log.go:172] (0xc00074c0b0) Reply frame received for 5\nI0122 13:12:31.755578 616 log.go:172] (0xc00074c0b0) Data frame received for 3\nI0122 13:12:31.755656 616 log.go:172] (0xc000626320) (3) Data frame
handling\nI0122 13:12:31.755663 616 log.go:172] (0xc000626320) (3) Data frame sent\nI0122 13:12:31.755695 616 log.go:172] (0xc00074c0b0) Data frame received for 5\nI0122 13:12:31.755699 616 log.go:172] (0xc00082e6e0) (5) Data frame handling\nI0122 13:12:31.755705 616 log.go:172] (0xc00082e6e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0122 13:12:31.944892 616 log.go:172] (0xc00074c0b0) Data frame received for 1\nI0122 13:12:31.945018 616 log.go:172] (0xc00082e640) (1) Data frame handling\nI0122 13:12:31.945036 616 log.go:172] (0xc00082e640) (1) Data frame sent\nI0122 13:12:31.945860 616 log.go:172] (0xc00074c0b0) (0xc00082e640) Stream removed, broadcasting: 1\nI0122 13:12:31.946712 616 log.go:172] (0xc00074c0b0) (0xc000626320) Stream removed, broadcasting: 3\nI0122 13:12:31.947007 616 log.go:172] (0xc00074c0b0) (0xc00082e6e0) Stream removed, broadcasting: 5\nI0122 13:12:31.947043 616 log.go:172] (0xc00074c0b0) (0xc00082e640) Stream removed, broadcasting: 1\nI0122 13:12:31.947051 616 log.go:172] (0xc00074c0b0) (0xc000626320) Stream removed, broadcasting: 3\nI0122 13:12:31.947054 616 log.go:172] (0xc00074c0b0) (0xc00082e6e0) Stream removed, broadcasting: 5\n" Jan 22 13:12:31.953: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 22 13:12:31.953: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 22 13:12:31.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2600 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 13:12:32.565: INFO: stderr: "I0122 13:12:32.119399 634 log.go:172] (0xc000820370) (0xc0003b95e0) Create stream\nI0122 13:12:32.119590 634 log.go:172] (0xc000820370) (0xc0003b95e0) Stream added, broadcasting: 1\nI0122 13:12:32.124159 634 log.go:172] (0xc000820370) Reply frame received for 1\nI0122 13:12:32.124399 634 log.go:172] (0xc000820370) (0xc0004e25a0) Create stream\nI0122 13:12:32.124418 634 log.go:172] (0xc000820370) (0xc0004e25a0) Stream added, broadcasting: 3\nI0122 13:12:32.125821 634 log.go:172] (0xc000820370) Reply frame received for 3\nI0122 13:12:32.125847 634 log.go:172] (0xc000820370) (0xc0006ee000) Create stream\nI0122 13:12:32.125864 634 log.go:172] (0xc000820370) (0xc0006ee000) Stream added, broadcasting: 5\nI0122 13:12:32.128117 634 log.go:172] (0xc000820370) Reply frame received for 5\nI0122 13:12:32.235627 634 log.go:172] (0xc000820370) Data frame received for 5\nI0122 13:12:32.235737 634 log.go:172] (0xc0006ee000) (5) Data frame handling\nI0122 13:12:32.235756 634 log.go:172] (0xc0006ee000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0122 13:12:32.236094 634 log.go:172] (0xc000820370) Data frame received for 3\nI0122 13:12:32.236107 634 log.go:172] (0xc0004e25a0) (3) Data frame handling\nI0122 13:12:32.236124 634 log.go:172] (0xc0004e25a0) (3) Data frame sent\nI0122 13:12:32.560683 634 log.go:172] (0xc000820370) Data frame received for 1\nI0122 13:12:32.560806 634 log.go:172] (0xc000820370) (0xc0004e25a0) Stream removed, broadcasting: 3\nI0122 13:12:32.561039 634 log.go:172] (0xc0003b95e0) (1) Data frame handling\nI0122 13:12:32.561051 634 log.go:172] (0xc0003b95e0) (1) Data frame sent\nI0122 13:12:32.561058 634 log.go:172] (0xc000820370) (0xc0003b95e0) Stream removed, broadcasting: 1\nI0122 13:12:32.561237 634 log.go:172] (0xc000820370) (0xc0006ee000) Stream removed, broadcasting: 5\nI0122 13:12:32.561255 634 
log.go:172] (0xc000820370) (0xc0003b95e0) Stream removed, broadcasting: 1\nI0122 13:12:32.561264 634 log.go:172] (0xc000820370) (0xc0004e25a0) Stream removed, broadcasting: 3\nI0122 13:12:32.561271 634 log.go:172] (0xc000820370) (0xc0006ee000) Stream removed, broadcasting: 5\n"
Jan 22 13:12:32.566: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 22 13:12:32.566: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 22 13:12:32.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 13:12:33.123: INFO: rc: 126
Jan 22 13:12:33.123: INFO: Waiting 10s to retry failed RunHostCmd: error running the kubectl exec above: exit status 126. Command stdout: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown. stderr: I0122 13:12:32.846782 646 log.go:172] (0xc0007f44d0) (0xc0004d2780) Create stream [... SPDY stream setup/teardown debug records (Create stream / Stream added / Reply frame / Data frame / Stream removed for broadcasts 1, 3 and 5) elided; the original log printed this block twice ...] command terminated with exit code 126. error: exit status 126
Jan 22 13:12:43.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 13:12:43.322: INFO: rc: 1
Jan 22 13:12:43.323: INFO: Waiting 10s to retry failed RunHostCmd: error running the same kubectl exec: exit status 1. Command stdout: (empty). stderr: error: unable to upgrade connection: container not found ("nginx")
Jan 22 13:12:53.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 13:12:53.502: INFO: rc: 1
Jan 22 13:12:53.503: INFO: Waiting 10s to retry failed RunHostCmd: error running the same kubectl exec: exit status 1. Command stdout: (empty). stderr: Error from server (NotFound): pods "ss-2" not found
[... the identical kubectl exec was retried roughly every 10 seconds from 13:13:03 through 13:17:29, each attempt failing with the same NotFound error; 27 repeated attempts elided ...]
Jan 22 13:17:39.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 13:17:40.186: INFO: rc: 1
Jan 22 13:17:40.186: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2:
Jan 22 13:17:40.186: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 22 13:17:40.210: INFO: Deleting all statefulset in ns statefulset-2600
Jan 22 13:17:40.214: INFO: Scaling statefulset ss to 0
Jan 22 13:17:40.224: INFO: Waiting for statefulset status.replicas updated to 0
Jan 22 13:17:40.227: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:17:40.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2600" for this suite.
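Note: the failure sequence above traces the teardown of pod ss-2 during the scale-down. The exec first hits the container runtime while the container is stopping (exit 126, "cannot exec a container that has stopped"), then the kubelet once the container is gone (exit 1, "container not found"), and finally the API server once the pod object itself has been deleted (NotFound). A minimal sketch of reproducing the same probe by hand, using the namespace and pod name from this run:

    # Sketch: the exec the framework retried every 10s (names taken from the log above)
    kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2600 ss-2 -- \
      /bin/sh -x -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
    echo "rc: $?"                                      # 126 or 1 while ss-2 terminates, as above
    kubectl --namespace=statefulset-2600 get pod ss-2  # NotFound once deletion completes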
Jan 22 13:17:46.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:17:46.474: INFO: namespace statefulset-2600 deletion completed in 6.153986602s • [SLOW TEST:388.771 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:17:46.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 22 13:17:46.712: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c46eb29f-d045-4c9d-8a1f-1b267aa93cfe" in namespace "projected-253" to be "success or failure" Jan 22 13:17:46.715: INFO: Pod "downwardapi-volume-c46eb29f-d045-4c9d-8a1f-1b267aa93cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.807822ms Jan 22 13:17:48.725: INFO: Pod "downwardapi-volume-c46eb29f-d045-4c9d-8a1f-1b267aa93cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013305975s Jan 22 13:17:50.739: INFO: Pod "downwardapi-volume-c46eb29f-d045-4c9d-8a1f-1b267aa93cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027424896s Jan 22 13:17:52.746: INFO: Pod "downwardapi-volume-c46eb29f-d045-4c9d-8a1f-1b267aa93cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034413784s Jan 22 13:17:54.755: INFO: Pod "downwardapi-volume-c46eb29f-d045-4c9d-8a1f-1b267aa93cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042911378s Jan 22 13:17:56.765: INFO: Pod "downwardapi-volume-c46eb29f-d045-4c9d-8a1f-1b267aa93cfe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.053445756s STEP: Saw pod success Jan 22 13:17:56.765: INFO: Pod "downwardapi-volume-c46eb29f-d045-4c9d-8a1f-1b267aa93cfe" satisfied condition "success or failure" Jan 22 13:17:56.770: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c46eb29f-d045-4c9d-8a1f-1b267aa93cfe container client-container: STEP: delete the pod Jan 22 13:17:56.824: INFO: Waiting for pod downwardapi-volume-c46eb29f-d045-4c9d-8a1f-1b267aa93cfe to disappear Jan 22 13:17:56.830: INFO: Pod downwardapi-volume-c46eb29f-d045-4c9d-8a1f-1b267aa93cfe no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:17:56.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-253" for this suite. Jan 22 13:18:02.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:18:03.010: INFO: namespace projected-253 deletion completed in 6.174596547s • [SLOW TEST:16.534 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:18:03.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-3368 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3368 STEP: Deleting pre-stop pod Jan 22 13:18:24.230: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:18:24.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3368" for this suite. 
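Note: the mechanism exercised here is the container lifecycle preStop hook: deleting a pod runs the hook before the container receives SIGTERM, and the JSON dump above shows the server pod recorded exactly one "prestop" callback. A minimal sketch of a pod with such a hook, with hypothetical name, image, and callback URL (the framework's actual server/tester manifests differ):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: prestop-demo                # hypothetical name
    spec:
      containers:
      - name: tester
        image: busybox:1.29             # hypothetical image
        command: ["sleep", "3600"]
        lifecycle:
          preStop:
            exec:
              # hypothetical callback target; this run's server records it under "prestop"
              command: ["wget", "-q", "-O-", "http://server:8080/prestop"]
    EOF
    kubectl delete pod prestop-demo     # deletion triggers the preStop exec before SIGTERM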
Jan 22 13:19:08.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:19:08.502: INFO: namespace prestop-3368 deletion completed in 44.233510982s • [SLOW TEST:65.490 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:19:08.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:19:16.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7775" for this suite. Jan 22 13:20:08.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:20:08.848: INFO: namespace kubelet-test-7775 deletion completed in 52.12991321s • [SLOW TEST:60.345 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:20:08.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 22 13:20:08.934: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:20:10.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2842" for this suite. Jan 22 13:20:16.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:20:16.248: INFO: namespace custom-resource-definition-2842 deletion completed in 6.185278529s • [SLOW TEST:7.395 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:20:16.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 22 13:20:16.384: INFO: Waiting up to 5m0s for pod "downwardapi-volume-757e4d6c-4ee0-4a79-8866-3906100f78c8" in namespace "downward-api-6782" to be "success or failure" Jan 22 13:20:16.390: INFO: Pod "downwardapi-volume-757e4d6c-4ee0-4a79-8866-3906100f78c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012458ms Jan 22 13:20:18.398: INFO: Pod "downwardapi-volume-757e4d6c-4ee0-4a79-8866-3906100f78c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014018096s Jan 22 13:20:20.403: INFO: Pod "downwardapi-volume-757e4d6c-4ee0-4a79-8866-3906100f78c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019408404s Jan 22 13:20:22.414: INFO: Pod "downwardapi-volume-757e4d6c-4ee0-4a79-8866-3906100f78c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030496224s Jan 22 13:20:24.429: INFO: Pod "downwardapi-volume-757e4d6c-4ee0-4a79-8866-3906100f78c8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.044774987s STEP: Saw pod success Jan 22 13:20:24.429: INFO: Pod "downwardapi-volume-757e4d6c-4ee0-4a79-8866-3906100f78c8" satisfied condition "success or failure" Jan 22 13:20:24.436: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-757e4d6c-4ee0-4a79-8866-3906100f78c8 container client-container: STEP: delete the pod Jan 22 13:20:24.495: INFO: Waiting for pod downwardapi-volume-757e4d6c-4ee0-4a79-8866-3906100f78c8 to disappear Jan 22 13:20:24.568: INFO: Pod downwardapi-volume-757e4d6c-4ee0-4a79-8866-3906100f78c8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:20:24.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6782" for this suite. Jan 22 13:20:30.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:20:30.914: INFO: namespace downward-api-6782 deletion completed in 6.331453702s • [SLOW TEST:14.665 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:20:30.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 22 13:20:31.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6567' Jan 22 13:20:31.130: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 22 13:20:31.130: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jan 22 13:20:31.170: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jan 22 13:20:31.212: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jan 22 13:20:31.228: INFO: scanned /root for discovery docs: Jan 22 13:20:31.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6567' Jan 22 13:20:50.695: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 22 13:20:50.695: INFO: stdout: "Created e2e-test-nginx-rc-b5e61fc33072cf6f6d0a04535b27a0a7\nScaling up e2e-test-nginx-rc-b5e61fc33072cf6f6d0a04535b27a0a7 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b5e61fc33072cf6f6d0a04535b27a0a7 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-b5e61fc33072cf6f6d0a04535b27a0a7 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
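Note: both stderr warnings in this test are expected on the v1.15 kubectl used here: `kubectl run --generator=run/v1` and `kubectl rolling-update` were already deprecated and were removed in later kubectl releases. The replacement workflow the warnings point to, sketched for reference (not what this test executes):

    kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
    kubectl set image deployment/e2e-test-nginx nginx=docker.io/library/nginx:1.14-alpine  # re-applying the same image
    kubectl rollout status deployment/e2e-test-nginx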
Jan 22 13:20:50.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6567'
Jan 22 13:20:50.826: INFO: stderr: ""
Jan 22 13:20:50.826: INFO: stdout: "e2e-test-nginx-rc-6t6qq e2e-test-nginx-rc-b5e61fc33072cf6f6d0a04535b27a0a7-wnggd "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
[... the identical get-pods poll was repeated every 5 seconds from 13:20:55 through 13:22:13, each time still listing both pods (expected=1 actual=2); 16 repeated polls elided ...]
Jan 22 13:22:18.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6567'
Jan 22 13:22:18.429: INFO: stderr: ""
Jan 22 13:22:18.429: INFO: stdout: "e2e-test-nginx-rc-b5e61fc33072cf6f6d0a04535b27a0a7-wnggd "
Jan 22 13:22:18.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b5e61fc33072cf6f6d0a04535b27a0a7-wnggd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6567'
Jan 22 13:22:18.551: INFO: stderr: ""
Jan 22 13:22:18.551: INFO: stdout: "true"
Jan 22 13:22:18.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b5e61fc33072cf6f6d0a04535b27a0a7-wnggd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6567'
Jan 22 13:22:18.639: INFO: stderr: ""
Jan 22 13:22:18.639: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 22 13:22:18.639: INFO: e2e-test-nginx-rc-b5e61fc33072cf6f6d0a04535b27a0a7-wnggd is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jan 22 13:22:18.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6567'
Jan 22 13:22:18.728: INFO: stderr: ""
Jan 22 13:22:18.728: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:22:18.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6567" for this suite.
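Note: the verification above is driven entirely by go-templates rather than jsonpath; the same pod-listing poll can be run by hand with the namespace and label selector taken from the log:

    kubectl --namespace=kubectl-6567 get pods -l run=e2e-test-nginx-rc \
      -o template --template='{{range.items}}{{.metadata.name}} {{end}}'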
Jan 22 13:22:40.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:22:40.962: INFO: namespace kubectl-6567 deletion completed in 22.228688261s • [SLOW TEST:130.048 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:22:40.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-bccf7b85-ad20-4b12-b201-c4bf2d5d210d STEP: Creating a pod to test consume secrets Jan 22 13:22:41.127: INFO: Waiting up to 5m0s for pod "pod-secrets-b2b9aa3d-e0df-476b-bee9-1f45aa73c934" in namespace "secrets-2805" to be "success or failure" Jan 22 13:22:41.139: INFO: Pod "pod-secrets-b2b9aa3d-e0df-476b-bee9-1f45aa73c934": Phase="Pending", Reason="", readiness=false. Elapsed: 12.433936ms Jan 22 13:22:43.157: INFO: Pod "pod-secrets-b2b9aa3d-e0df-476b-bee9-1f45aa73c934": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029493187s Jan 22 13:22:45.167: INFO: Pod "pod-secrets-b2b9aa3d-e0df-476b-bee9-1f45aa73c934": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040002572s Jan 22 13:22:47.212: INFO: Pod "pod-secrets-b2b9aa3d-e0df-476b-bee9-1f45aa73c934": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085286009s Jan 22 13:22:49.221: INFO: Pod "pod-secrets-b2b9aa3d-e0df-476b-bee9-1f45aa73c934": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093562064s Jan 22 13:22:51.230: INFO: Pod "pod-secrets-b2b9aa3d-e0df-476b-bee9-1f45aa73c934": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.102595765s STEP: Saw pod success Jan 22 13:22:51.230: INFO: Pod "pod-secrets-b2b9aa3d-e0df-476b-bee9-1f45aa73c934" satisfied condition "success or failure" Jan 22 13:22:51.234: INFO: Trying to get logs from node iruya-node pod pod-secrets-b2b9aa3d-e0df-476b-bee9-1f45aa73c934 container secret-volume-test: STEP: delete the pod Jan 22 13:22:51.298: INFO: Waiting for pod pod-secrets-b2b9aa3d-e0df-476b-bee9-1f45aa73c934 to disappear Jan 22 13:22:51.306: INFO: Pod pod-secrets-b2b9aa3d-e0df-476b-bee9-1f45aa73c934 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:22:51.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2805" for this suite. 
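Note: the test above mounts the generated secret into the pod as a volume and has the test container read it back. A minimal sketch of that pod shape, with hypothetical secret name, key, and image (the framework generates its own names, as seen above):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-volume-demo          # hypothetical name
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: demo-secret       # hypothetical; this run used a generated name
      containers:
      - name: secret-volume-test
        image: busybox:1.29             # hypothetical image
        command: ["cat", "/etc/secret-volume/data-1"]   # assumed key name
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      restartPolicy: Never
    EOF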
Jan 22 13:22:57.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:22:57.546: INFO: namespace secrets-2805 deletion completed in 6.146559684s • [SLOW TEST:16.584 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:22:57.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-73ac610a-ba68-4f49-a550-e63bad9bd269 STEP: Creating a pod to test consume configMaps Jan 22 13:22:57.852: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fb364423-9092-4608-8c1c-b67c67db142d" in namespace "projected-4860" to be "success or failure" Jan 22 13:22:57.872: INFO: Pod "pod-projected-configmaps-fb364423-9092-4608-8c1c-b67c67db142d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.226916ms Jan 22 13:22:59.886: INFO: Pod "pod-projected-configmaps-fb364423-9092-4608-8c1c-b67c67db142d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033361863s Jan 22 13:23:01.903: INFO: Pod "pod-projected-configmaps-fb364423-9092-4608-8c1c-b67c67db142d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050922337s Jan 22 13:23:03.917: INFO: Pod "pod-projected-configmaps-fb364423-9092-4608-8c1c-b67c67db142d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064416885s Jan 22 13:23:05.951: INFO: Pod "pod-projected-configmaps-fb364423-9092-4608-8c1c-b67c67db142d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098399918s STEP: Saw pod success Jan 22 13:23:05.951: INFO: Pod "pod-projected-configmaps-fb364423-9092-4608-8c1c-b67c67db142d" satisfied condition "success or failure" Jan 22 13:23:05.962: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-fb364423-9092-4608-8c1c-b67c67db142d container projected-configmap-volume-test: STEP: delete the pod Jan 22 13:23:06.168: INFO: Waiting for pod pod-projected-configmaps-fb364423-9092-4608-8c1c-b67c67db142d to disappear Jan 22 13:23:06.221: INFO: Pod pod-projected-configmaps-fb364423-9092-4608-8c1c-b67c67db142d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:23:06.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4860" for this suite. 
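Note: "mappings and Item mode set" in the test name refers to the items list of a projected configMap source, which remaps a key to a new path and sets a per-file mode. A minimal sketch with hypothetical names (the framework's generated objects differ):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-configmap-demo    # hypothetical name
    spec:
      volumes:
      - name: projected-configmap-volume
        projected:
          sources:
          - configMap:
              name: demo-configmap      # hypothetical name
              items:
              - key: data-1             # the "mapping": key -> new path
                path: path/to/data-2
                mode: 0400              # the per-item file mode
      containers:
      - name: projected-configmap-volume-test
        image: busybox:1.29             # hypothetical image
        command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
        volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected-configmap-volume
      restartPolicy: Never
    EOF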
Jan 22 13:23:12.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:23:12.394: INFO: namespace projected-4860 deletion completed in 6.16014484s • [SLOW TEST:14.847 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:23:12.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-jbpq STEP: Creating a pod to test atomic-volume-subpath Jan 22 13:23:12.575: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jbpq" in namespace "subpath-2021" to be "success or failure" Jan 22 13:23:12.583: INFO: Pod "pod-subpath-test-configmap-jbpq": Phase="Pending", Reason="", readiness=false. Elapsed: 7.57756ms Jan 22 13:23:14.605: INFO: Pod "pod-subpath-test-configmap-jbpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029572735s Jan 22 13:23:16.611: INFO: Pod "pod-subpath-test-configmap-jbpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036296871s Jan 22 13:23:18.642: INFO: Pod "pod-subpath-test-configmap-jbpq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066612857s Jan 22 13:23:20.649: INFO: Pod "pod-subpath-test-configmap-jbpq": Phase="Running", Reason="", readiness=true. Elapsed: 8.074228783s Jan 22 13:23:22.674: INFO: Pod "pod-subpath-test-configmap-jbpq": Phase="Running", Reason="", readiness=true. Elapsed: 10.098906999s Jan 22 13:23:24.687: INFO: Pod "pod-subpath-test-configmap-jbpq": Phase="Running", Reason="", readiness=true. Elapsed: 12.111956311s Jan 22 13:23:26.696: INFO: Pod "pod-subpath-test-configmap-jbpq": Phase="Running", Reason="", readiness=true. Elapsed: 14.120975633s Jan 22 13:23:28.707: INFO: Pod "pod-subpath-test-configmap-jbpq": Phase="Running", Reason="", readiness=true. Elapsed: 16.132152319s Jan 22 13:23:30.721: INFO: Pod "pod-subpath-test-configmap-jbpq": Phase="Running", Reason="", readiness=true. Elapsed: 18.145565242s Jan 22 13:23:32.727: INFO: Pod "pod-subpath-test-configmap-jbpq": Phase="Running", Reason="", readiness=true. Elapsed: 20.151766459s Jan 22 13:23:34.738: INFO: Pod "pod-subpath-test-configmap-jbpq": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.16254384s Jan 22 13:23:36.746: INFO: Pod "pod-subpath-test-configmap-jbpq": Phase="Running", Reason="", readiness=true. Elapsed: 24.170464109s Jan 22 13:23:38.754: INFO: Pod "pod-subpath-test-configmap-jbpq": Phase="Running", Reason="", readiness=true. Elapsed: 26.179270275s Jan 22 13:23:40.763: INFO: Pod "pod-subpath-test-configmap-jbpq": Phase="Running", Reason="", readiness=true. Elapsed: 28.187539103s Jan 22 13:23:42.773: INFO: Pod "pod-subpath-test-configmap-jbpq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.197643162s STEP: Saw pod success Jan 22 13:23:42.773: INFO: Pod "pod-subpath-test-configmap-jbpq" satisfied condition "success or failure" Jan 22 13:23:42.777: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-jbpq container test-container-subpath-configmap-jbpq: STEP: delete the pod Jan 22 13:23:42.820: INFO: Waiting for pod pod-subpath-test-configmap-jbpq to disappear Jan 22 13:23:42.879: INFO: Pod pod-subpath-test-configmap-jbpq no longer exists STEP: Deleting pod pod-subpath-test-configmap-jbpq Jan 22 13:23:42.879: INFO: Deleting pod "pod-subpath-test-configmap-jbpq" in namespace "subpath-2021" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:23:42.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2021" for this suite. Jan 22 13:23:48.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:23:49.137: INFO: namespace subpath-2021 deletion completed in 6.249185405s • [SLOW TEST:36.742 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:23:49.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 22 13:23:49.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1827' Jan 22 13:23:49.327: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 22 13:23:49.327: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jan 22 13:23:49.343: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-csl5m] Jan 22 13:23:49.343: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-csl5m" in namespace "kubectl-1827" to be "running and ready" Jan 22 13:23:49.352: INFO: Pod "e2e-test-nginx-rc-csl5m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.468835ms Jan 22 13:23:51.362: INFO: Pod "e2e-test-nginx-rc-csl5m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018602962s Jan 22 13:23:53.373: INFO: Pod "e2e-test-nginx-rc-csl5m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030210109s Jan 22 13:23:55.382: INFO: Pod "e2e-test-nginx-rc-csl5m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038679465s Jan 22 13:23:57.392: INFO: Pod "e2e-test-nginx-rc-csl5m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048466116s Jan 22 13:23:59.400: INFO: Pod "e2e-test-nginx-rc-csl5m": Phase="Running", Reason="", readiness=true. Elapsed: 10.056852453s Jan 22 13:23:59.400: INFO: Pod "e2e-test-nginx-rc-csl5m" satisfied condition "running and ready" Jan 22 13:23:59.400: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-csl5m] Jan 22 13:23:59.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-1827' Jan 22 13:23:59.638: INFO: stderr: "" Jan 22 13:23:59.638: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Jan 22 13:23:59.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1827' Jan 22 13:23:59.813: INFO: stderr: "" Jan 22 13:23:59.813: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:23:59.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1827" for this suite. 
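Aside: --generator=run/v1 (deprecated, as the stderr above warns) makes kubectl run create a ReplicationController rather than a Deployment or a bare pod. The object it generates is roughly equivalent to this manifest (reconstructed, not captured from the run):

apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc        # kubectl run labels pods with run=<name>
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine

The empty stdout from "kubectl logs rc/e2e-test-nginx-rc" is fine here: the test only asserts that fetching logs through the rc/ prefix works, and a freshly started nginx that has served no requests has nothing to log yet.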
Jan 22 13:24:21.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:24:21.979: INFO: namespace kubectl-1827 deletion completed in 22.161659328s • [SLOW TEST:32.842 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:24:21.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0122 13:24:52.665676 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 22 13:24:52.665: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:24:52.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8665" for this suite. 
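Aside: the orphaning behavior checked above is driven by deleteOptions.propagationPolicy. A sketch of the delete request body (on this release the kubectl equivalent would be "kubectl delete deployment <name> --cascade=false"; exact client wiring is an assumption):

kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan        # delete the Deployment but leave its ReplicaSet behind

With Orphan, the garbage collector strips the ownerReferences from the dependent ReplicaSet instead of deleting it, which is why the test waits 30 seconds and then verifies the ReplicaSet still exists.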
Jan 22 13:24:58.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:24:59.171: INFO: namespace gc-8665 deletion completed in 6.498506728s • [SLOW TEST:37.191 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:24:59.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-f8b6082b-132d-40cb-a114-fbf2beac9e83 STEP: Creating a pod to test consume secrets Jan 22 13:24:59.588: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-58efcb0a-aa7a-4bc5-9e35-869d83d44485" in namespace "projected-4354" to be "success or failure" Jan 22 13:24:59.830: INFO: Pod "pod-projected-secrets-58efcb0a-aa7a-4bc5-9e35-869d83d44485": Phase="Pending", Reason="", readiness=false. Elapsed: 241.994116ms Jan 22 13:25:02.115: INFO: Pod "pod-projected-secrets-58efcb0a-aa7a-4bc5-9e35-869d83d44485": Phase="Pending", Reason="", readiness=false. Elapsed: 2.527287099s Jan 22 13:25:04.130: INFO: Pod "pod-projected-secrets-58efcb0a-aa7a-4bc5-9e35-869d83d44485": Phase="Pending", Reason="", readiness=false. Elapsed: 4.542345457s Jan 22 13:25:06.139: INFO: Pod "pod-projected-secrets-58efcb0a-aa7a-4bc5-9e35-869d83d44485": Phase="Pending", Reason="", readiness=false. Elapsed: 6.550967096s Jan 22 13:25:08.150: INFO: Pod "pod-projected-secrets-58efcb0a-aa7a-4bc5-9e35-869d83d44485": Phase="Pending", Reason="", readiness=false. Elapsed: 8.562395626s Jan 22 13:25:10.161: INFO: Pod "pod-projected-secrets-58efcb0a-aa7a-4bc5-9e35-869d83d44485": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.572978958s STEP: Saw pod success Jan 22 13:25:10.161: INFO: Pod "pod-projected-secrets-58efcb0a-aa7a-4bc5-9e35-869d83d44485" satisfied condition "success or failure" Jan 22 13:25:10.168: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-58efcb0a-aa7a-4bc5-9e35-869d83d44485 container projected-secret-volume-test: STEP: delete the pod Jan 22 13:25:10.217: INFO: Waiting for pod pod-projected-secrets-58efcb0a-aa7a-4bc5-9e35-869d83d44485 to disappear Jan 22 13:25:10.233: INFO: Pod pod-projected-secrets-58efcb0a-aa7a-4bc5-9e35-869d83d44485 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:25:10.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4354" for this suite. Jan 22 13:25:16.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:25:16.440: INFO: namespace projected-4354 deletion completed in 6.149583873s • [SLOW TEST:17.268 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:25:16.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 22 13:25:25.170: INFO: Successfully updated pod "pod-update-activedeadlineseconds-20a0eeb0-5609-4387-8502-6984a9811a58" Jan 22 13:25:25.170: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-20a0eeb0-5609-4387-8502-6984a9811a58" in namespace "pods-4357" to be "terminated due to deadline exceeded" Jan 22 13:25:25.188: INFO: Pod "pod-update-activedeadlineseconds-20a0eeb0-5609-4387-8502-6984a9811a58": Phase="Running", Reason="", readiness=true. Elapsed: 18.50918ms Jan 22 13:25:27.196: INFO: Pod "pod-update-activedeadlineseconds-20a0eeb0-5609-4387-8502-6984a9811a58": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.025741586s Jan 22 13:25:27.196: INFO: Pod "pod-update-activedeadlineseconds-20a0eeb0-5609-4387-8502-6984a9811a58" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:25:27.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4357" for this suite. Jan 22 13:25:35.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:25:35.399: INFO: namespace pods-4357 deletion completed in 8.197436968s • [SLOW TEST:18.959 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:25:35.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-4c702a6b-08bc-4771-a8f4-6cb5a784fb61 STEP: Creating a pod to test consume secrets Jan 22 13:25:35.549: INFO: Waiting up to 5m0s for pod "pod-secrets-5937fa16-ed73-493c-86e9-628842d8154a" in namespace "secrets-1720" to be "success or failure" Jan 22 13:25:35.570: INFO: Pod "pod-secrets-5937fa16-ed73-493c-86e9-628842d8154a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.361687ms Jan 22 13:25:37.581: INFO: Pod "pod-secrets-5937fa16-ed73-493c-86e9-628842d8154a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031893203s Jan 22 13:25:39.619: INFO: Pod "pod-secrets-5937fa16-ed73-493c-86e9-628842d8154a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070109348s Jan 22 13:25:41.631: INFO: Pod "pod-secrets-5937fa16-ed73-493c-86e9-628842d8154a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082213134s Jan 22 13:25:43.639: INFO: Pod "pod-secrets-5937fa16-ed73-493c-86e9-628842d8154a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090435746s Jan 22 13:25:45.649: INFO: Pod "pod-secrets-5937fa16-ed73-493c-86e9-628842d8154a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.100004881s STEP: Saw pod success Jan 22 13:25:45.649: INFO: Pod "pod-secrets-5937fa16-ed73-493c-86e9-628842d8154a" satisfied condition "success or failure" Jan 22 13:25:45.654: INFO: Trying to get logs from node iruya-node pod pod-secrets-5937fa16-ed73-493c-86e9-628842d8154a container secret-env-test: STEP: delete the pod Jan 22 13:25:45.869: INFO: Waiting for pod pod-secrets-5937fa16-ed73-493c-86e9-628842d8154a to disappear Jan 22 13:25:45.896: INFO: Pod pod-secrets-5937fa16-ed73-493c-86e9-628842d8154a no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:25:45.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1720" for this suite. Jan 22 13:25:51.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:25:52.063: INFO: namespace secrets-1720 deletion completed in 6.150207596s • [SLOW TEST:16.663 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:25:52.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:25:52.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1516" for this suite. 
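Aside: the Secrets env-var test just above (namespace secrets-1720) injects a Secret key into the container environment rather than mounting it as a file. A minimal sketch (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.29
    command: ["sh", "-c", "echo $SECRET_DATA"]   # pod Succeeds once the value is printed
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test      # illustrative; the run used secret-test-4c702a6b-...
          key: data-1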
Jan 22 13:25:58.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:25:58.445: INFO: namespace kubelet-test-1516 deletion completed in 6.171462034s • [SLOW TEST:6.382 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:25:58.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Jan 22 13:25:58.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jan 22 13:25:58.723: INFO: stderr: "" Jan 22 13:25:58.723: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:25:58.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5024" for this suite. 
Jan 22 13:26:04.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:26:04.836: INFO: namespace kubectl-5024 deletion completed in 6.103647794s • [SLOW TEST:6.391 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:26:04.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 22 13:26:13.626: INFO: Successfully updated pod "labelsupdatec32d3820-10c0-4796-97eb-bcf8a671b94d" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:26:17.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-876" for this suite. 
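Aside: the projected downwardAPI test above relies on the kubelet refreshing downward API files when pod metadata changes; after the labels update ("Successfully updated pod ..."), the mounted file eventually reflects the new values. A sketch of the pod shape being exercised (illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example
  labels:
    key1: value1                 # the test mutates this and watches the mounted file change
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels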
Jan 22 13:26:39.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:26:39.938: INFO: namespace projected-876 deletion completed in 22.209777428s • [SLOW TEST:35.101 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:26:39.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-1ebf9d6c-1fc8-47b0-8310-7b3347858ea3 STEP: Creating a pod to test consume configMaps Jan 22 13:26:40.089: INFO: Waiting up to 5m0s for pod "pod-configmaps-baeb6998-c971-4d7d-84fa-5e0c7d5ef6a9" in namespace "configmap-5477" to be "success or failure" Jan 22 13:26:40.118: INFO: Pod "pod-configmaps-baeb6998-c971-4d7d-84fa-5e0c7d5ef6a9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.434739ms Jan 22 13:26:42.140: INFO: Pod "pod-configmaps-baeb6998-c971-4d7d-84fa-5e0c7d5ef6a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050776928s Jan 22 13:26:44.150: INFO: Pod "pod-configmaps-baeb6998-c971-4d7d-84fa-5e0c7d5ef6a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060325565s Jan 22 13:26:46.159: INFO: Pod "pod-configmaps-baeb6998-c971-4d7d-84fa-5e0c7d5ef6a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069702447s Jan 22 13:26:48.170: INFO: Pod "pod-configmaps-baeb6998-c971-4d7d-84fa-5e0c7d5ef6a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080683164s Jan 22 13:26:50.179: INFO: Pod "pod-configmaps-baeb6998-c971-4d7d-84fa-5e0c7d5ef6a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089332098s STEP: Saw pod success Jan 22 13:26:50.179: INFO: Pod "pod-configmaps-baeb6998-c971-4d7d-84fa-5e0c7d5ef6a9" satisfied condition "success or failure" Jan 22 13:26:50.184: INFO: Trying to get logs from node iruya-node pod pod-configmaps-baeb6998-c971-4d7d-84fa-5e0c7d5ef6a9 container configmap-volume-test: STEP: delete the pod Jan 22 13:26:50.271: INFO: Waiting for pod pod-configmaps-baeb6998-c971-4d7d-84fa-5e0c7d5ef6a9 to disappear Jan 22 13:26:50.279: INFO: Pod pod-configmaps-baeb6998-c971-4d7d-84fa-5e0c7d5ef6a9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:26:50.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5477" for this suite. 
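Aside: this is the same mapping-plus-mode check as the projected configMap case earlier, but through the plain configMap volume source rather than a projected sources list. A sketch (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:                             # direct volume source; compare projected: above
      name: configmap-test-volume-map
      items:
      - key: data-2
        path: path/to/data-2               # remapped path inside the mount
        mode: 0400                         # per-item file mode ([LinuxOnly])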
Jan 22 13:26:56.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:26:56.479: INFO: namespace configmap-5477 deletion completed in 6.191909477s • [SLOW TEST:16.542 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:26:56.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 22 13:26:56.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-3125' Jan 22 13:26:56.719: INFO: stderr: "" Jan 22 13:26:56.719: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jan 22 13:27:06.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-3125 -o json' Jan 22 13:27:06.952: INFO: stderr: "" Jan 22 13:27:06.952: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-01-22T13:26:56Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-3125\",\n \"resourceVersion\": \"21433488\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3125/pods/e2e-test-nginx-pod\",\n \"uid\": \"ade4cdb6-a479-436a-919e-d543ec1bee23\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-6gvwd\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n 
\"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-6gvwd\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-6gvwd\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-22T13:26:56Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-22T13:27:03Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-22T13:27:03Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-22T13:26:56Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://0d57f5402d6163375d1c639806b70ba13bdbdd71497a998cb4a925b5c2d192aa\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-22T13:27:03Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.3.65\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-22T13:26:56Z\"\n }\n}\n" STEP: replace the image in the pod Jan 22 13:27:06.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3125' Jan 22 13:27:07.416: INFO: stderr: "" Jan 22 13:27:07.416: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Jan 22 13:27:07.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3125' Jan 22 13:27:13.730: INFO: stderr: "" Jan 22 13:27:13.730: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:27:13.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3125" for this suite. 
Jan 22 13:27:19.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:27:19.937: INFO: namespace kubectl-3125 deletion completed in 6.180901114s • [SLOW TEST:23.457 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:27:19.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 22 13:27:20.060: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 22 13:27:25.072: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 22 13:27:29.089: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 22 13:27:31.098: INFO: Creating deployment "test-rollover-deployment" Jan 22 13:27:31.115: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 22 13:27:33.135: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 22 13:27:33.140: INFO: Ensure that both replica sets have 1 created replica Jan 22 13:27:33.144: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 22 13:27:33.154: INFO: Updating deployment test-rollover-deployment Jan 22 13:27:33.154: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 22 13:27:35.178: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 22 13:27:35.200: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 22 13:27:35.218: INFO: all replica sets need to contain the pod-template-hash label Jan 22 13:27:35.218: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296453, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, 
loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 22 13:27:37.231: INFO: all replica sets need to contain the pod-template-hash label Jan 22 13:27:37.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296453, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 22 13:27:39.436: INFO: all replica sets need to contain the pod-template-hash label Jan 22 13:27:39.436: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296453, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 22 13:27:41.249: INFO: all replica sets need to contain the pod-template-hash label Jan 22 13:27:41.249: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296460, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 22 13:27:43.235: INFO: all replica sets need to contain the pod-template-hash label Jan 22 13:27:43.235: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296460, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 22 13:27:45.232: INFO: all replica sets need to contain the pod-template-hash label Jan 22 13:27:45.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296460, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 22 13:27:47.235: INFO: all replica sets need to contain the pod-template-hash label Jan 22 13:27:47.235: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296460, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 22 13:27:49.242: INFO: all replica sets need to contain the pod-template-hash label Jan 22 13:27:49.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296460, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715296451, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 22 13:27:51.233: INFO: Jan 22 13:27:51.233: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 22 13:27:51.249: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-1442,SelfLink:/apis/apps/v1/namespaces/deployment-1442/deployments/test-rollover-deployment,UID:c586df61-8e04-4b17-ba99-c49abb25ead0,ResourceVersion:21433647,Generation:2,CreationTimestamp:2020-01-22 13:27:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-22 13:27:31 +0000 UTC 2020-01-22 13:27:31 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-22 
13:27:50 +0000 UTC 2020-01-22 13:27:31 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 22 13:27:51.255: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-1442,SelfLink:/apis/apps/v1/namespaces/deployment-1442/replicasets/test-rollover-deployment-854595fc44,UID:0ed2b67e-2ed0-48da-acec-3ca3b5ac6d58,ResourceVersion:21433635,Generation:2,CreationTimestamp:2020-01-22 13:27:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c586df61-8e04-4b17-ba99-c49abb25ead0 0xc002d7be17 0xc002d7be18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 22 13:27:51.255: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 22 13:27:51.255: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-1442,SelfLink:/apis/apps/v1/namespaces/deployment-1442/replicasets/test-rollover-controller,UID:478c9205-24b3-411d-b23a-baa7e88e3a5c,ResourceVersion:21433646,Generation:2,CreationTimestamp:2020-01-22 13:27:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c586df61-8e04-4b17-ba99-c49abb25ead0 0xc002d7bd47 0xc002d7bd48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 22 13:27:51.256: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-1442,SelfLink:/apis/apps/v1/namespaces/deployment-1442/replicasets/test-rollover-deployment-9b8b997cf,UID:d4c52e6c-6ec5-4c1c-8839-1c9a80091b72,ResourceVersion:21433602,Generation:2,CreationTimestamp:2020-01-22 13:27:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c586df61-8e04-4b17-ba99-c49abb25ead0 0xc002d7bee0 0xc002d7bee1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 22 13:27:51.263: INFO: Pod "test-rollover-deployment-854595fc44-mqbw7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-mqbw7,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-1442,SelfLink:/api/v1/namespaces/deployment-1442/pods/test-rollover-deployment-854595fc44-mqbw7,UID:423e318a-cbd9-4df6-848b-98096703e4d0,ResourceVersion:21433619,Generation:0,CreationTimestamp:2020-01-22 13:27:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 0ed2b67e-2ed0-48da-acec-3ca3b5ac6d58 0xc00259e5b7 0xc00259e5b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qccbc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qccbc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-qccbc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00259e620} {node.kubernetes.io/unreachable Exists NoExecute 0xc00259e640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:27:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:27:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:27:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:27:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-22 13:27:33 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-22 13:27:40 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://c5601a8024bfaefae2aff98871f37117d668c4a773a2f07aecd67256d5d18295}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:27:51.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1442" for this suite. Jan 22 13:27:59.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:27:59.949: INFO: namespace deployment-1442 deletion completed in 8.680938841s • [SLOW TEST:40.011 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:27:59.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jan 22 13:28:00.505: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 22 13:28:00.570: INFO: Waiting for terminating namespaces to be deleted... 
Jan 22 13:28:00.576: INFO: Logging pods the kubelet thinks are on node iruya-node before test Jan 22 13:28:00.601: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded) Jan 22 13:28:00.601: INFO: Container kube-proxy ready: true, restart count 0 Jan 22 13:28:00.601: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Jan 22 13:28:00.601: INFO: Container weave ready: true, restart count 0 Jan 22 13:28:00.601: INFO: Container weave-npc ready: true, restart count 0 Jan 22 13:28:00.601: INFO: Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test Jan 22 13:28:00.633: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded) Jan 22 13:28:00.633: INFO: Container coredns ready: true, restart count 0 Jan 22 13:28:00.633: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded) Jan 22 13:28:00.633: INFO: Container etcd ready: true, restart count 0 Jan 22 13:28:00.633: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Jan 22 13:28:00.633: INFO: Container weave ready: true, restart count 0 Jan 22 13:28:00.633: INFO: Container weave-npc ready: true, restart count 0 Jan 22 13:28:00.633: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded) Jan 22 13:28:00.633: INFO: Container kube-controller-manager ready: true, restart count 19 Jan 22 13:28:00.633: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded) Jan 22 13:28:00.633: INFO: Container kube-proxy ready: true, restart count 0 Jan 22 13:28:00.633: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded) Jan 22 13:28:00.633: INFO: Container kube-apiserver ready: true, restart count 0 Jan 22 13:28:00.633: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded) Jan 22 13:28:00.633: INFO: Container kube-scheduler ready: true, restart count 13 Jan 22 13:28:00.633: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded) Jan 22 13:28:00.633: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e33a80dd-b2b3-4b7a-8bcc-52b8e52b4a8f 42 STEP: Trying to relaunch the pod, now with labels. 
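The relaunched pod in the step above pins itself to the freshly labeled node with a nodeSelector. A minimal sketch of what that pod looks like; only the label key and the value 42 come from the log, the pod name, image, and command are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: with-labels                    # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/e2e-e33a80dd-b2b3-4b7a-8bcc-52b8e52b4a8f: "42"
  containers:
  - name: with-labels                  # hypothetical name
    image: docker.io/library/busybox:1.29   # assumed placeholder image
    command: ['sh', '-c', 'sleep 3600']     # assumed placeholder workload

The test passes because the scheduler places this pod on iruya-node, the only node carrying the matching label.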
STEP: removing the label kubernetes.io/e2e-e33a80dd-b2b3-4b7a-8bcc-52b8e52b4a8f off the node iruya-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-e33a80dd-b2b3-4b7a-8bcc-52b8e52b4a8f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:28:19.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4265" for this suite. Jan 22 13:28:37.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:28:37.284: INFO: namespace sched-pred-4265 deletion completed in 18.154445746s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:37.335 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:28:37.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-28fd52bb-ba99-4e9c-8a7f-6084afed4440 STEP: Creating a pod to test consume configMaps Jan 22 13:28:37.436: INFO: Waiting up to 5m0s for pod "pod-configmaps-c1b46175-cd48-49cf-ac17-ab4245991cab" in namespace "configmap-5016" to be "success or failure" Jan 22 13:28:37.462: INFO: Pod "pod-configmaps-c1b46175-cd48-49cf-ac17-ab4245991cab": Phase="Pending", Reason="", readiness=false. Elapsed: 26.109226ms Jan 22 13:28:39.470: INFO: Pod "pod-configmaps-c1b46175-cd48-49cf-ac17-ab4245991cab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033753308s Jan 22 13:28:41.475: INFO: Pod "pod-configmaps-c1b46175-cd48-49cf-ac17-ab4245991cab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039212058s Jan 22 13:28:43.518: INFO: Pod "pod-configmaps-c1b46175-cd48-49cf-ac17-ab4245991cab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082285225s Jan 22 13:28:45.525: INFO: Pod "pod-configmaps-c1b46175-cd48-49cf-ac17-ab4245991cab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088811036s Jan 22 13:28:47.563: INFO: Pod "pod-configmaps-c1b46175-cd48-49cf-ac17-ab4245991cab": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.126714025s STEP: Saw pod success Jan 22 13:28:47.563: INFO: Pod "pod-configmaps-c1b46175-cd48-49cf-ac17-ab4245991cab" satisfied condition "success or failure" Jan 22 13:28:47.569: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c1b46175-cd48-49cf-ac17-ab4245991cab container configmap-volume-test: STEP: delete the pod Jan 22 13:28:47.817: INFO: Waiting for pod pod-configmaps-c1b46175-cd48-49cf-ac17-ab4245991cab to disappear Jan 22 13:28:47.832: INFO: Pod pod-configmaps-c1b46175-cd48-49cf-ac17-ab4245991cab no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:28:47.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5016" for this suite. Jan 22 13:28:53.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:28:54.071: INFO: namespace configmap-5016 deletion completed in 6.220833303s • [SLOW TEST:16.786 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:28:54.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-glrn STEP: Creating a pod to test atomic-volume-subpath Jan 22 13:28:54.244: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-glrn" in namespace "subpath-6469" to be "success or failure" Jan 22 13:28:54.254: INFO: Pod "pod-subpath-test-configmap-glrn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.148402ms Jan 22 13:28:56.266: INFO: Pod "pod-subpath-test-configmap-glrn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021879676s Jan 22 13:28:58.274: INFO: Pod "pod-subpath-test-configmap-glrn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029452374s Jan 22 13:29:00.282: INFO: Pod "pod-subpath-test-configmap-glrn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037336343s Jan 22 13:29:02.288: INFO: Pod "pod-subpath-test-configmap-glrn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044211174s Jan 22 13:29:04.299: INFO: Pod "pod-subpath-test-configmap-glrn": Phase="Running", Reason="", readiness=true. Elapsed: 10.054755089s Jan 22 13:29:06.313: INFO: Pod "pod-subpath-test-configmap-glrn": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.068599975s Jan 22 13:29:08.321: INFO: Pod "pod-subpath-test-configmap-glrn": Phase="Running", Reason="", readiness=true. Elapsed: 14.077249983s Jan 22 13:29:10.335: INFO: Pod "pod-subpath-test-configmap-glrn": Phase="Running", Reason="", readiness=true. Elapsed: 16.090914477s Jan 22 13:29:12.350: INFO: Pod "pod-subpath-test-configmap-glrn": Phase="Running", Reason="", readiness=true. Elapsed: 18.106164314s Jan 22 13:29:14.363: INFO: Pod "pod-subpath-test-configmap-glrn": Phase="Running", Reason="", readiness=true. Elapsed: 20.118997249s Jan 22 13:29:16.376: INFO: Pod "pod-subpath-test-configmap-glrn": Phase="Running", Reason="", readiness=true. Elapsed: 22.131862752s Jan 22 13:29:18.387: INFO: Pod "pod-subpath-test-configmap-glrn": Phase="Running", Reason="", readiness=true. Elapsed: 24.142694288s Jan 22 13:29:20.397: INFO: Pod "pod-subpath-test-configmap-glrn": Phase="Running", Reason="", readiness=true. Elapsed: 26.152469567s Jan 22 13:29:22.406: INFO: Pod "pod-subpath-test-configmap-glrn": Phase="Running", Reason="", readiness=true. Elapsed: 28.16161604s Jan 22 13:29:24.414: INFO: Pod "pod-subpath-test-configmap-glrn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.169743326s STEP: Saw pod success Jan 22 13:29:24.414: INFO: Pod "pod-subpath-test-configmap-glrn" satisfied condition "success or failure" Jan 22 13:29:24.420: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-glrn container test-container-subpath-configmap-glrn: STEP: delete the pod Jan 22 13:29:24.865: INFO: Waiting for pod pod-subpath-test-configmap-glrn to disappear Jan 22 13:29:24.902: INFO: Pod pod-subpath-test-configmap-glrn no longer exists STEP: Deleting pod pod-subpath-test-configmap-glrn Jan 22 13:29:24.902: INFO: Deleting pod "pod-subpath-test-configmap-glrn" in namespace "subpath-6469" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:29:24.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6469" for this suite. 
Jan 22 13:29:30.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:29:31.054: INFO: namespace subpath-6469 deletion completed in 6.14253755s • [SLOW TEST:36.983 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:29:31.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 22 13:29:31.101: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:29:44.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8638" for this suite. 
Jan 22 13:29:50.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:29:50.914: INFO: namespace init-container-8638 deletion completed in 6.274192557s • [SLOW TEST:19.860 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:29:50.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Jan 22 13:29:51.037: INFO: Waiting up to 5m0s for pod "client-containers-8778a1c3-d5ac-4116-a7cd-17df2d1d4c2b" in namespace "containers-6103" to be "success or failure" Jan 22 13:29:51.055: INFO: Pod "client-containers-8778a1c3-d5ac-4116-a7cd-17df2d1d4c2b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.166249ms Jan 22 13:29:53.068: INFO: Pod "client-containers-8778a1c3-d5ac-4116-a7cd-17df2d1d4c2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030744547s Jan 22 13:29:55.088: INFO: Pod "client-containers-8778a1c3-d5ac-4116-a7cd-17df2d1d4c2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050737013s Jan 22 13:29:57.097: INFO: Pod "client-containers-8778a1c3-d5ac-4116-a7cd-17df2d1d4c2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059621349s Jan 22 13:29:59.114: INFO: Pod "client-containers-8778a1c3-d5ac-4116-a7cd-17df2d1d4c2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077213066s STEP: Saw pod success Jan 22 13:29:59.115: INFO: Pod "client-containers-8778a1c3-d5ac-4116-a7cd-17df2d1d4c2b" satisfied condition "success or failure" Jan 22 13:29:59.119: INFO: Trying to get logs from node iruya-node pod client-containers-8778a1c3-d5ac-4116-a7cd-17df2d1d4c2b container test-container: STEP: delete the pod Jan 22 13:29:59.186: INFO: Waiting for pod client-containers-8778a1c3-d5ac-4116-a7cd-17df2d1d4c2b to disappear Jan 22 13:29:59.195: INFO: Pod client-containers-8778a1c3-d5ac-4116-a7cd-17df2d1d4c2b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:29:59.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6103" for this suite. 
Jan 22 13:30:05.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:30:05.435: INFO: namespace containers-6103 deletion completed in 6.235181748s • [SLOW TEST:14.520 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:30:05.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-2a96142d-1d7f-447f-bc41-e7735d6562b5 STEP: Creating a pod to test consume secrets Jan 22 13:30:05.555: INFO: Waiting up to 5m0s for pod "pod-secrets-4273a237-0587-4749-906a-9796c16777c3" in namespace "secrets-2966" to be "success or failure" Jan 22 13:30:05.577: INFO: Pod "pod-secrets-4273a237-0587-4749-906a-9796c16777c3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.935867ms Jan 22 13:30:07.584: INFO: Pod "pod-secrets-4273a237-0587-4749-906a-9796c16777c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029104331s Jan 22 13:30:09.593: INFO: Pod "pod-secrets-4273a237-0587-4749-906a-9796c16777c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038025127s Jan 22 13:30:11.602: INFO: Pod "pod-secrets-4273a237-0587-4749-906a-9796c16777c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04643063s Jan 22 13:30:13.615: INFO: Pod "pod-secrets-4273a237-0587-4749-906a-9796c16777c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059970144s STEP: Saw pod success Jan 22 13:30:13.615: INFO: Pod "pod-secrets-4273a237-0587-4749-906a-9796c16777c3" satisfied condition "success or failure" Jan 22 13:30:13.619: INFO: Trying to get logs from node iruya-node pod pod-secrets-4273a237-0587-4749-906a-9796c16777c3 container secret-volume-test: STEP: delete the pod Jan 22 13:30:13.859: INFO: Waiting for pod pod-secrets-4273a237-0587-4749-906a-9796c16777c3 to disappear Jan 22 13:30:13.871: INFO: Pod pod-secrets-4273a237-0587-4749-906a-9796c16777c3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:30:13.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2966" for this suite. 
Jan 22 13:30:19.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:30:20.041: INFO: namespace secrets-2966 deletion completed in 6.159953121s • [SLOW TEST:14.605 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:30:20.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 22 13:30:20.137: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:30:30.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2277" for this suite. 
Jan 22 13:31:12.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:31:12.406: INFO: namespace pods-2277 deletion completed in 42.139334261s • [SLOW TEST:52.364 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:31:12.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-37c66c5e-a787-4c30-9fad-0ea09a22371e in namespace container-probe-4449 Jan 22 13:31:20.561: INFO: Started pod test-webserver-37c66c5e-a787-4c30-9fad-0ea09a22371e in namespace container-probe-4449 STEP: checking the pod's current state and verifying that restartCount is present Jan 22 13:31:20.569: INFO: Initial restart count of pod test-webserver-37c66c5e-a787-4c30-9fad-0ea09a22371e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:35:22.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4449" for this suite. 
Jan 22 13:35:28.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:35:28.680: INFO: namespace container-probe-4449 deletion completed in 6.279566239s • [SLOW TEST:256.274 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:35:28.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 22 13:35:28.740: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:35:46.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2949" for this suite. 
Jan 22 13:36:08.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:36:08.811: INFO: namespace init-container-2949 deletion completed in 22.157589877s • [SLOW TEST:40.131 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:36:08.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 22 13:36:08.886: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:36:22.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-164" for this suite. 
Jan 22 13:36:28.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:36:28.807: INFO: namespace init-container-164 deletion completed in 6.130902343s • [SLOW TEST:19.996 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:36:28.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 22 13:36:28.878: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb150c51-a828-49bf-80ab-5849a515ef3b" in namespace "projected-4181" to be "success or failure" Jan 22 13:36:28.922: INFO: Pod "downwardapi-volume-fb150c51-a828-49bf-80ab-5849a515ef3b": Phase="Pending", Reason="", readiness=false. Elapsed: 44.039057ms Jan 22 13:36:30.934: INFO: Pod "downwardapi-volume-fb150c51-a828-49bf-80ab-5849a515ef3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055690449s Jan 22 13:36:32.944: INFO: Pod "downwardapi-volume-fb150c51-a828-49bf-80ab-5849a515ef3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065760226s Jan 22 13:36:34.957: INFO: Pod "downwardapi-volume-fb150c51-a828-49bf-80ab-5849a515ef3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079079766s Jan 22 13:36:36.967: INFO: Pod "downwardapi-volume-fb150c51-a828-49bf-80ab-5849a515ef3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088804627s STEP: Saw pod success Jan 22 13:36:36.967: INFO: Pod "downwardapi-volume-fb150c51-a828-49bf-80ab-5849a515ef3b" satisfied condition "success or failure" Jan 22 13:36:36.970: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-fb150c51-a828-49bf-80ab-5849a515ef3b container client-container: STEP: delete the pod Jan 22 13:36:37.004: INFO: Waiting for pod downwardapi-volume-fb150c51-a828-49bf-80ab-5849a515ef3b to disappear Jan 22 13:36:37.010: INFO: Pod downwardapi-volume-fb150c51-a828-49bf-80ab-5849a515ef3b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:36:37.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4181" for this suite. 
Jan 22 13:36:43.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:36:43.294: INFO: namespace projected-4181 deletion completed in 6.274655845s • [SLOW TEST:14.487 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:36:43.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Jan 22 13:36:43.524: INFO: Waiting up to 5m0s for pod "client-containers-3c97cd22-cebd-44ae-90e9-40d49baa5cfb" in namespace "containers-5699" to be "success or failure" Jan 22 13:36:43.535: INFO: Pod "client-containers-3c97cd22-cebd-44ae-90e9-40d49baa5cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.591451ms Jan 22 13:36:45.544: INFO: Pod "client-containers-3c97cd22-cebd-44ae-90e9-40d49baa5cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019911799s Jan 22 13:36:47.601: INFO: Pod "client-containers-3c97cd22-cebd-44ae-90e9-40d49baa5cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076815761s Jan 22 13:36:49.607: INFO: Pod "client-containers-3c97cd22-cebd-44ae-90e9-40d49baa5cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083122453s Jan 22 13:36:51.619: INFO: Pod "client-containers-3c97cd22-cebd-44ae-90e9-40d49baa5cfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.095280649s STEP: Saw pod success Jan 22 13:36:51.619: INFO: Pod "client-containers-3c97cd22-cebd-44ae-90e9-40d49baa5cfb" satisfied condition "success or failure" Jan 22 13:36:51.624: INFO: Trying to get logs from node iruya-node pod client-containers-3c97cd22-cebd-44ae-90e9-40d49baa5cfb container test-container: STEP: delete the pod Jan 22 13:36:51.698: INFO: Waiting for pod client-containers-3c97cd22-cebd-44ae-90e9-40d49baa5cfb to disappear Jan 22 13:36:51.705: INFO: Pod client-containers-3c97cd22-cebd-44ae-90e9-40d49baa5cfb no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:36:51.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5699" for this suite. 
Jan 22 13:36:57.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:36:57.906: INFO: namespace containers-5699 deletion completed in 6.157846606s • [SLOW TEST:14.610 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:36:57.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-d2a33b85-8bcc-4ab4-b224-5fdf63701a1c STEP: Creating secret with name secret-projected-all-test-volume-22599b9f-339a-4ce7-92e4-9c8c2930379c STEP: Creating a pod to test Check all projections for projected volume plugin Jan 22 13:36:58.030: INFO: Waiting up to 5m0s for pod "projected-volume-b5e1b887-1b75-46dc-b5f2-bf2573f7b27f" in namespace "projected-4877" to be "success or failure" Jan 22 13:36:58.052: INFO: Pod "projected-volume-b5e1b887-1b75-46dc-b5f2-bf2573f7b27f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.762084ms Jan 22 13:37:00.067: INFO: Pod "projected-volume-b5e1b887-1b75-46dc-b5f2-bf2573f7b27f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037420285s Jan 22 13:37:02.076: INFO: Pod "projected-volume-b5e1b887-1b75-46dc-b5f2-bf2573f7b27f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046541168s Jan 22 13:37:04.087: INFO: Pod "projected-volume-b5e1b887-1b75-46dc-b5f2-bf2573f7b27f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057413196s Jan 22 13:37:06.093: INFO: Pod "projected-volume-b5e1b887-1b75-46dc-b5f2-bf2573f7b27f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063557669s STEP: Saw pod success Jan 22 13:37:06.094: INFO: Pod "projected-volume-b5e1b887-1b75-46dc-b5f2-bf2573f7b27f" satisfied condition "success or failure" Jan 22 13:37:06.096: INFO: Trying to get logs from node iruya-node pod projected-volume-b5e1b887-1b75-46dc-b5f2-bf2573f7b27f container projected-all-volume-test: STEP: delete the pod Jan 22 13:37:06.217: INFO: Waiting for pod projected-volume-b5e1b887-1b75-46dc-b5f2-bf2573f7b27f to disappear Jan 22 13:37:06.221: INFO: Pod projected-volume-b5e1b887-1b75-46dc-b5f2-bf2573f7b27f no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:37:06.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4877" for this suite. 
Jan 22 13:37:12.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:37:12.375: INFO: namespace projected-4877 deletion completed in 6.150024918s • [SLOW TEST:14.469 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:37:12.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 22 13:37:12.523: INFO: Waiting up to 5m0s for pod "pod-282fb24c-d50d-4816-b26e-80e9a81cfdc3" in namespace "emptydir-9443" to be "success or failure" Jan 22 13:37:12.537: INFO: Pod "pod-282fb24c-d50d-4816-b26e-80e9a81cfdc3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.732505ms Jan 22 13:37:14.553: INFO: Pod "pod-282fb24c-d50d-4816-b26e-80e9a81cfdc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029702451s Jan 22 13:37:16.572: INFO: Pod "pod-282fb24c-d50d-4816-b26e-80e9a81cfdc3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048600171s Jan 22 13:37:18.593: INFO: Pod "pod-282fb24c-d50d-4816-b26e-80e9a81cfdc3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069257691s Jan 22 13:37:20.608: INFO: Pod "pod-282fb24c-d50d-4816-b26e-80e9a81cfdc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.084896979s STEP: Saw pod success Jan 22 13:37:20.608: INFO: Pod "pod-282fb24c-d50d-4816-b26e-80e9a81cfdc3" satisfied condition "success or failure" Jan 22 13:37:20.671: INFO: Trying to get logs from node iruya-node pod pod-282fb24c-d50d-4816-b26e-80e9a81cfdc3 container test-container: STEP: delete the pod Jan 22 13:37:20.726: INFO: Waiting for pod pod-282fb24c-d50d-4816-b26e-80e9a81cfdc3 to disappear Jan 22 13:37:20.735: INFO: Pod pod-282fb24c-d50d-4816-b26e-80e9a81cfdc3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:37:20.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9443" for this suite. 
Jan 22 13:37:26.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:37:26.913: INFO: namespace emptydir-9443 deletion completed in 6.171254072s • [SLOW TEST:14.537 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:37:26.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-61e4553f-f4b5-4f6b-b134-846ea6fbb22b STEP: Creating a pod to test consume configMaps Jan 22 13:37:27.068: INFO: Waiting up to 5m0s for pod "pod-configmaps-bb9eec58-9d53-4be7-856f-0067e6ce92b4" in namespace "configmap-6579" to be "success or failure" Jan 22 13:37:27.077: INFO: Pod "pod-configmaps-bb9eec58-9d53-4be7-856f-0067e6ce92b4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.834305ms Jan 22 13:37:29.085: INFO: Pod "pod-configmaps-bb9eec58-9d53-4be7-856f-0067e6ce92b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017131548s Jan 22 13:37:31.096: INFO: Pod "pod-configmaps-bb9eec58-9d53-4be7-856f-0067e6ce92b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028022004s Jan 22 13:37:33.108: INFO: Pod "pod-configmaps-bb9eec58-9d53-4be7-856f-0067e6ce92b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040589157s Jan 22 13:37:35.125: INFO: Pod "pod-configmaps-bb9eec58-9d53-4be7-856f-0067e6ce92b4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057586633s Jan 22 13:37:37.135: INFO: Pod "pod-configmaps-bb9eec58-9d53-4be7-856f-0067e6ce92b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067471558s STEP: Saw pod success Jan 22 13:37:37.135: INFO: Pod "pod-configmaps-bb9eec58-9d53-4be7-856f-0067e6ce92b4" satisfied condition "success or failure" Jan 22 13:37:37.141: INFO: Trying to get logs from node iruya-node pod pod-configmaps-bb9eec58-9d53-4be7-856f-0067e6ce92b4 container configmap-volume-test: STEP: delete the pod Jan 22 13:37:37.190: INFO: Waiting for pod pod-configmaps-bb9eec58-9d53-4be7-856f-0067e6ce92b4 to disappear Jan 22 13:37:37.196: INFO: Pod pod-configmaps-bb9eec58-9d53-4be7-856f-0067e6ce92b4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:37:37.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6579" for this suite. 
Jan 22 13:37:43.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:37:43.510: INFO: namespace configmap-6579 deletion completed in 6.30715693s • [SLOW TEST:16.595 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:37:43.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Jan 22 13:37:43.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7051 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 22 13:37:56.213: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0122 13:37:54.432695 1815 log.go:172] (0xc000b149a0) (0xc000581360) Create stream\nI0122 13:37:54.432760 1815 log.go:172] (0xc000b149a0) (0xc000581360) Stream added, broadcasting: 1\nI0122 13:37:54.438737 1815 log.go:172] (0xc000b149a0) Reply frame received for 1\nI0122 13:37:54.438769 1815 log.go:172] (0xc000b149a0) (0xc000581400) Create stream\nI0122 13:37:54.438779 1815 log.go:172] (0xc000b149a0) (0xc000581400) Stream added, broadcasting: 3\nI0122 13:37:54.442311 1815 log.go:172] (0xc000b149a0) Reply frame received for 3\nI0122 13:37:54.442354 1815 log.go:172] (0xc000b149a0) (0xc00099a000) Create stream\nI0122 13:37:54.442369 1815 log.go:172] (0xc000b149a0) (0xc00099a000) Stream added, broadcasting: 5\nI0122 13:37:54.444628 1815 log.go:172] (0xc000b149a0) Reply frame received for 5\nI0122 13:37:54.444677 1815 log.go:172] (0xc000b149a0) (0xc0005814a0) Create stream\nI0122 13:37:54.444697 1815 log.go:172] (0xc000b149a0) (0xc0005814a0) Stream added, broadcasting: 7\nI0122 13:37:54.447606 1815 log.go:172] (0xc000b149a0) Reply frame received for 7\nI0122 13:37:54.447737 1815 log.go:172] (0xc000581400) (3) Writing data frame\nI0122 13:37:54.447908 1815 log.go:172] (0xc000581400) (3) Writing data frame\nI0122 13:37:54.455374 1815 log.go:172] (0xc000b149a0) Data frame received for 5\nI0122 13:37:54.455453 1815 log.go:172] (0xc00099a000) (5) Data frame handling\nI0122 13:37:54.455485 1815 log.go:172] (0xc00099a000) (5) Data frame sent\nI0122 13:37:54.462854 1815 log.go:172] (0xc000b149a0) Data frame received for 5\nI0122 13:37:54.462869 1815 log.go:172] (0xc00099a000) (5) Data frame handling\nI0122 13:37:54.462886 1815 log.go:172] (0xc00099a000) (5) Data frame sent\nI0122 13:37:56.179366 1815 log.go:172] (0xc000b149a0) Data frame received for 1\nI0122 13:37:56.179668 1815 log.go:172] (0xc000b149a0) (0xc000581400) Stream removed, broadcasting: 3\nI0122 13:37:56.180038 1815 log.go:172] (0xc000581360) (1) Data frame handling\nI0122 13:37:56.180090 1815 log.go:172] (0xc000581360) (1) Data frame sent\nI0122 13:37:56.180131 1815 log.go:172] (0xc000b149a0) (0xc000581360) Stream removed, broadcasting: 1\nI0122 13:37:56.180639 1815 log.go:172] (0xc000b149a0) (0xc00099a000) Stream removed, broadcasting: 5\nI0122 13:37:56.180735 1815 log.go:172] (0xc000b149a0) (0xc0005814a0) Stream removed, broadcasting: 7\nI0122 13:37:56.180780 1815 log.go:172] (0xc000b149a0) Go away received\nI0122 13:37:56.180840 1815 log.go:172] (0xc000b149a0) (0xc000581360) Stream removed, broadcasting: 1\nI0122 13:37:56.180871 1815 log.go:172] (0xc000b149a0) (0xc000581400) Stream removed, broadcasting: 3\nI0122 13:37:56.180884 1815 log.go:172] (0xc000b149a0) (0xc00099a000) Stream removed, broadcasting: 5\nI0122 13:37:56.180898 1815 log.go:172] (0xc000b149a0) (0xc0005814a0) Stream removed, broadcasting: 7\n" Jan 22 13:37:56.213: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:37:58.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7051" for this suite. 
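The deprecation warning in the stderr above concerns the generator, not the feature: --generator=job/v1 expands the logged kubectl run invocation into an ordinary batch/v1 Job. Roughly the object it created; the name, namespace, image, restart policy, and attached command are from the log, everything else is assumed:

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
  namespace: kubectl-7051
spec:
  template:
    spec:
      restartPolicy: OnFailure         # from --restart=OnFailure in the logged command
      containers:
      - name: e2e-test-rm-busybox-job  # assumed; generators typically reuse the run name
        image: docker.io/library/busybox:1.29
        stdin: true                    # required for the --stdin/--attach flow in the log
        command: ['sh', '-c', 'cat && echo stdin closed']

The stdout line "abcd1234stdin closed" shows the piped stdin being echoed back by cat, and --rm then deletes the Job once the attached session ends, matching the 'job.batch "e2e-test-rm-busybox-job" deleted' message.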
Jan 22 13:38:04.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:38:04.357: INFO: namespace kubectl-7051 deletion completed in 6.124835616s • [SLOW TEST:20.847 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:38:04.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jan 22 13:38:12.464: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-c0b10479-b94c-4f7e-8bf1-b80d664240a4,GenerateName:,Namespace:events-1136,SelfLink:/api/v1/namespaces/events-1136/pods/send-events-c0b10479-b94c-4f7e-8bf1-b80d664240a4,UID:72984509-1646-499b-8eeb-19cf164938e9,ResourceVersion:21434960,Generation:0,CreationTimestamp:2020-01-22 13:38:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 415798684,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dcx55 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dcx55,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-dcx55 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00258ad90} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00258adb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:38:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:38:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:38:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:38:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-22 13:38:04 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-22 13:38:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://47faa2f3d4df62df9a55e60e480f7673089a256d8942ffa52685261a5f708f7a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jan 22 13:38:14.475: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jan 22 13:38:16.487: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:38:16.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1136" for this suite. Jan 22 13:38:58.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:38:58.678: INFO: namespace events-1136 deletion completed in 42.164600701s • [SLOW TEST:54.321 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:38:58.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:39:07.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3122" for this suite. 
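The adoption flow above needs only two objects: a bare pod carrying a 'name' label, then a replication controller whose selector matches that label. A minimal sketch under those assumptions (names and image are illustrative, not the framework's exact spec):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine
EOF
# The controller counts the pre-existing pod toward its replica total and
# sets itself as the pod's ownerReference rather than creating a new pod.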
Jan 22 13:39:30.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:39:30.160: INFO: namespace replication-controller-3122 deletion completed in 22.165141406s • [SLOW TEST:31.481 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:39:30.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 22 13:39:30.301: INFO: Waiting up to 5m0s for pod "downward-api-f8bffa1e-0a9f-40fd-97a9-812509853ea0" in namespace "downward-api-8796" to be "success or failure" Jan 22 13:39:30.310: INFO: Pod "downward-api-f8bffa1e-0a9f-40fd-97a9-812509853ea0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.625377ms Jan 22 13:39:32.334: INFO: Pod "downward-api-f8bffa1e-0a9f-40fd-97a9-812509853ea0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032827842s Jan 22 13:39:35.035: INFO: Pod "downward-api-f8bffa1e-0a9f-40fd-97a9-812509853ea0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.733602324s Jan 22 13:39:37.109: INFO: Pod "downward-api-f8bffa1e-0a9f-40fd-97a9-812509853ea0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.807013197s Jan 22 13:39:39.122: INFO: Pod "downward-api-f8bffa1e-0a9f-40fd-97a9-812509853ea0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.820952785s STEP: Saw pod success Jan 22 13:39:39.123: INFO: Pod "downward-api-f8bffa1e-0a9f-40fd-97a9-812509853ea0" satisfied condition "success or failure" Jan 22 13:39:39.128: INFO: Trying to get logs from node iruya-node pod downward-api-f8bffa1e-0a9f-40fd-97a9-812509853ea0 container dapi-container: STEP: delete the pod Jan 22 13:39:39.237: INFO: Waiting for pod downward-api-f8bffa1e-0a9f-40fd-97a9-812509853ea0 to disappear Jan 22 13:39:39.244: INFO: Pod downward-api-f8bffa1e-0a9f-40fd-97a9-812509853ea0 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:39:39.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8796" for this suite. 
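The dapi-container above reads its own name, namespace and IP through downward API env vars; a minimal sketch of such a pod spec (pod and variable names are illustrative, the fieldRef paths are the standard ones):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF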
Jan 22 13:39:45.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:39:45.467: INFO: namespace downward-api-8796 deletion completed in 6.217418476s • [SLOW TEST:15.307 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:39:45.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 22 13:39:45.580: INFO: PodSpec: initContainers in spec.initContainers Jan 22 13:40:47.405: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4fc29962-dd5f-49ee-bf44-6612c5365060", GenerateName:"", Namespace:"init-container-2192", SelfLink:"/api/v1/namespaces/init-container-2192/pods/pod-init-4fc29962-dd5f-49ee-bf44-6612c5365060", UID:"280adfea-1c39-429b-9338-8aff44f80be4", ResourceVersion:"21435260", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715297185, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"580356682"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-js47n", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001309180), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-js47n", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-js47n", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-js47n", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002c00a78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0024a7c80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c00b00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c00b20)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002c00b28), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002c00b2c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715297185, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715297185, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715297185, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715297185, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc0025e3b60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c007e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c00850)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://2411392fee352fdef883b25717cf5e6437a57501078efc365408ba8ff7a0dfd6"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0025e3ba0), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0025e3b80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:40:47.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2192" for this suite. Jan 22 13:41:09.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:41:09.679: INFO: namespace init-container-2192 deletion completed in 22.171431537s • [SLOW TEST:84.211 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:41:09.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jan 22 13:41:09.827: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 22 13:41:09.834: INFO: Waiting for terminating namespaces to be deleted... 
Jan 22 13:41:09.837: INFO: Logging pods the kubelet thinks are on node iruya-node before test Jan 22 13:41:09.865: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Jan 22 13:41:09.865: INFO: Container weave ready: true, restart count 0 Jan 22 13:41:09.865: INFO: Container weave-npc ready: true, restart count 0 Jan 22 13:41:09.865: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded) Jan 22 13:41:09.866: INFO: Container kube-proxy ready: true, restart count 0 Jan 22 13:41:09.866: INFO: Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test Jan 22 13:41:09.885: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded) Jan 22 13:41:09.885: INFO: Container coredns ready: true, restart count 0 Jan 22 13:41:09.885: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded) Jan 22 13:41:09.885: INFO: Container kube-scheduler ready: true, restart count 13 Jan 22 13:41:09.885: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Jan 22 13:41:09.885: INFO: Container weave ready: true, restart count 0 Jan 22 13:41:09.885: INFO: Container weave-npc ready: true, restart count 0 Jan 22 13:41:09.885: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded) Jan 22 13:41:09.885: INFO: Container coredns ready: true, restart count 0 Jan 22 13:41:09.885: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded) Jan 22 13:41:09.885: INFO: Container etcd ready: true, restart count 0 Jan 22 13:41:09.885: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded) Jan 22 13:41:09.885: INFO: Container kube-proxy ready: true, restart count 0 Jan 22 13:41:09.885: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded) Jan 22 13:41:09.885: INFO: Container kube-controller-manager ready: true, restart count 19 Jan 22 13:41:09.885: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded) Jan 22 13:41:09.885: INFO: Container kube-apiserver ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-node STEP: verifying the node has the label node iruya-server-sfge57q7djm7 Jan 22 13:41:09.984: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Jan 22 13:41:09.984: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Jan 22 13:41:09.984: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Jan 22 13:41:09.984: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7 Jan 22 13:41:09.984: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7 Jan 22 13:41:09.984: INFO: Pod
kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Jan 22 13:41:09.984: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node Jan 22 13:41:09.985: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Jan 22 13:41:09.985: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7 Jan 22 13:41:09.985: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-66abceee-c619-4736-b663-ea43693fa7c1.15ec39353abb3bba], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8504/filler-pod-66abceee-c619-4736-b663-ea43693fa7c1 to iruya-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-66abceee-c619-4736-b663-ea43693fa7c1.15ec39367c8f5eab], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-66abceee-c619-4736-b663-ea43693fa7c1.15ec3937529bd437], Reason = [Created], Message = [Created container filler-pod-66abceee-c619-4736-b663-ea43693fa7c1] STEP: Considering event: Type = [Normal], Name = [filler-pod-66abceee-c619-4736-b663-ea43693fa7c1.15ec393776d2a832], Reason = [Started], Message = [Started container filler-pod-66abceee-c619-4736-b663-ea43693fa7c1] STEP: Considering event: Type = [Normal], Name = [filler-pod-92ce061d-5faa-4f0d-80f1-915eff1f7eaf.15ec39353b164fed], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8504/filler-pod-92ce061d-5faa-4f0d-80f1-915eff1f7eaf to iruya-server-sfge57q7djm7] STEP: Considering event: Type = [Normal], Name = [filler-pod-92ce061d-5faa-4f0d-80f1-915eff1f7eaf.15ec39367f7e7032], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-92ce061d-5faa-4f0d-80f1-915eff1f7eaf.15ec393747faa299], Reason = [Created], Message = [Created container filler-pod-92ce061d-5faa-4f0d-80f1-915eff1f7eaf] STEP: Considering event: Type = [Normal], Name = [filler-pod-92ce061d-5faa-4f0d-80f1-915eff1f7eaf.15ec39376eb42809], Reason = [Started], Message = [Started container filler-pod-92ce061d-5faa-4f0d-80f1-915eff1f7eaf] STEP: Considering event: Type = [Warning], Name = [additional-pod.15ec393807ccaba4], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node iruya-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-server-sfge57q7djm7 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:41:23.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8504" for this suite. 
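The FailedScheduling event is produced by requesting more CPU than either node has left once the filler pods land. A sketch of the over-requesting pod (the request value is illustrative; the test derives it from node allocatable minus the per-node sums logged above):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: additional-pod
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "600m"  # more than any node has free -> "0/2 nodes are available: 2 Insufficient cpu."
EOF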
Jan 22 13:41:31.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:41:31.359: INFO: namespace sched-pred-8504 deletion completed in 8.106030041s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:21.679 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:41:31.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-c7f78c04-ed84-49dd-b9cd-9831912f9927 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:41:45.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1144" for this suite. 
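Binary payloads live under a ConfigMap's binaryData key as base64, next to ordinary data entries. A minimal sketch of the object plus a pod that mounts and dumps it (names and payload are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo
data:
  data-1: value-1
binaryData:
  dump.bin: AQIDBA==  # base64 of the raw bytes 0x01 0x02 0x03 0x04
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-binary-pod
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "od -An -tx1 /etc/cm/dump.bin"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: configmap-binary-demo
EOF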
Jan 22 13:42:07.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:42:07.344: INFO: namespace configmap-1144 deletion completed in 22.170191733s • [SLOW TEST:35.984 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:42:07.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 22 13:42:07.493: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jan 22 13:42:07.551: INFO: Number of nodes with available pods: 0 Jan 22 13:42:07.552: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:08.577: INFO: Number of nodes with available pods: 0 Jan 22 13:42:08.577: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:09.720: INFO: Number of nodes with available pods: 0 Jan 22 13:42:09.721: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:10.582: INFO: Number of nodes with available pods: 0 Jan 22 13:42:10.582: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:11.575: INFO: Number of nodes with available pods: 0 Jan 22 13:42:11.575: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:13.186: INFO: Number of nodes with available pods: 0 Jan 22 13:42:13.186: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:14.052: INFO: Number of nodes with available pods: 0 Jan 22 13:42:14.052: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:14.578: INFO: Number of nodes with available pods: 0 Jan 22 13:42:14.578: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:15.686: INFO: Number of nodes with available pods: 0 Jan 22 13:42:15.686: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:16.569: INFO: Number of nodes with available pods: 0 Jan 22 13:42:16.569: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:17.573: INFO: Number of nodes with available pods: 1 Jan 22 13:42:17.574: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:18.577: INFO: Number of nodes with available pods: 2 Jan 22 13:42:18.577: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 22 13:42:18.658: INFO: Wrong image for pod: daemon-set-s8rzp. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:18.658: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:20.013: INFO: Wrong image for pod: daemon-set-s8rzp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:20.014: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:20.693: INFO: Wrong image for pod: daemon-set-s8rzp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:20.693: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:21.702: INFO: Wrong image for pod: daemon-set-s8rzp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:21.702: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:22.703: INFO: Wrong image for pod: daemon-set-s8rzp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:22.703: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:23.699: INFO: Wrong image for pod: daemon-set-s8rzp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:23.699: INFO: Pod daemon-set-s8rzp is not available Jan 22 13:42:23.699: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:24.705: INFO: Wrong image for pod: daemon-set-s8rzp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:24.706: INFO: Pod daemon-set-s8rzp is not available Jan 22 13:42:24.706: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:25.698: INFO: Wrong image for pod: daemon-set-s8rzp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:25.699: INFO: Pod daemon-set-s8rzp is not available Jan 22 13:42:25.699: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:26.700: INFO: Wrong image for pod: daemon-set-s8rzp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:26.700: INFO: Pod daemon-set-s8rzp is not available Jan 22 13:42:26.700: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:27.699: INFO: Wrong image for pod: daemon-set-s8rzp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:27.699: INFO: Pod daemon-set-s8rzp is not available Jan 22 13:42:27.699: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jan 22 13:42:29.023: INFO: Pod daemon-set-27qh6 is not available Jan 22 13:42:29.023: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:29.700: INFO: Pod daemon-set-27qh6 is not available Jan 22 13:42:29.700: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:30.701: INFO: Pod daemon-set-27qh6 is not available Jan 22 13:42:30.701: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:31.697: INFO: Pod daemon-set-27qh6 is not available Jan 22 13:42:31.697: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:32.696: INFO: Pod daemon-set-27qh6 is not available Jan 22 13:42:32.696: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:33.767: INFO: Pod daemon-set-27qh6 is not available Jan 22 13:42:33.767: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:34.701: INFO: Pod daemon-set-27qh6 is not available Jan 22 13:42:34.701: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:35.703: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:36.696: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:37.712: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:38.698: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:39.697: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:40.695: INFO: Wrong image for pod: daemon-set-wncpl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 13:42:40.695: INFO: Pod daemon-set-wncpl is not available Jan 22 13:42:41.696: INFO: Pod daemon-set-kp5xp is not available STEP: Check that daemon pods are still running on every node of the cluster. 
Jan 22 13:42:41.716: INFO: Number of nodes with available pods: 1 Jan 22 13:42:41.716: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:42.737: INFO: Number of nodes with available pods: 1 Jan 22 13:42:42.738: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:43.738: INFO: Number of nodes with available pods: 1 Jan 22 13:42:43.738: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:44.727: INFO: Number of nodes with available pods: 1 Jan 22 13:42:44.727: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:45.732: INFO: Number of nodes with available pods: 1 Jan 22 13:42:45.732: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:46.745: INFO: Number of nodes with available pods: 1 Jan 22 13:42:46.745: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:47.733: INFO: Number of nodes with available pods: 1 Jan 22 13:42:47.733: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:48.733: INFO: Number of nodes with available pods: 1 Jan 22 13:42:48.733: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:42:49.734: INFO: Number of nodes with available pods: 2 Jan 22 13:42:49.734: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8405, will wait for the garbage collector to delete the pods Jan 22 13:42:49.840: INFO: Deleting DaemonSet.extensions daemon-set took: 24.499468ms Jan 22 13:42:50.141: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.461868ms Jan 22 13:43:06.553: INFO: Number of nodes with available pods: 0 Jan 22 13:43:06.553: INFO: Number of running nodes: 0, number of available pods: 0 Jan 22 13:43:06.559: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8405/daemonsets","resourceVersion":"21435620"},"items":null} Jan 22 13:43:06.563: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8405/pods","resourceVersion":"21435620"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:43:06.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8405" for this suite. 
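The node-by-node replacement visible above (one pod deleted, its successor created, then the second node) is the RollingUpdate strategy with its default maxUnavailable of 1. A sketch of the relevant spec fragment and the image bump that triggers the roll (selector and container name are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Updating the pod template image starts the rolling replacement:
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0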
Jan 22 13:43:14.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:43:14.733: INFO: namespace daemonsets-8405 deletion completed in 8.130394489s • [SLOW TEST:67.389 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:43:14.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-dc563157-1b9f-4cea-b54b-cc8be1dd46a2 STEP: Creating a pod to test consume secrets Jan 22 13:43:14.895: INFO: Waiting up to 5m0s for pod "pod-secrets-e25f8736-eb3b-4138-8d1e-dd12286c8f91" in namespace "secrets-7451" to be "success or failure" Jan 22 13:43:14.900: INFO: Pod "pod-secrets-e25f8736-eb3b-4138-8d1e-dd12286c8f91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.874935ms Jan 22 13:43:16.907: INFO: Pod "pod-secrets-e25f8736-eb3b-4138-8d1e-dd12286c8f91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011925865s Jan 22 13:43:18.916: INFO: Pod "pod-secrets-e25f8736-eb3b-4138-8d1e-dd12286c8f91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020922563s Jan 22 13:43:20.927: INFO: Pod "pod-secrets-e25f8736-eb3b-4138-8d1e-dd12286c8f91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03170206s Jan 22 13:43:22.936: INFO: Pod "pod-secrets-e25f8736-eb3b-4138-8d1e-dd12286c8f91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041498266s STEP: Saw pod success Jan 22 13:43:22.936: INFO: Pod "pod-secrets-e25f8736-eb3b-4138-8d1e-dd12286c8f91" satisfied condition "success or failure" Jan 22 13:43:22.940: INFO: Trying to get logs from node iruya-node pod pod-secrets-e25f8736-eb3b-4138-8d1e-dd12286c8f91 container secret-volume-test: STEP: delete the pod Jan 22 13:43:22.980: INFO: Waiting for pod pod-secrets-e25f8736-eb3b-4138-8d1e-dd12286c8f91 to disappear Jan 22 13:43:22.988: INFO: Pod pod-secrets-e25f8736-eb3b-4138-8d1e-dd12286c8f91 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:43:22.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7451" for this suite. 
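defaultMode sets the permission bits on every file the secret volume projects; the DefaultMode:*420 seen in the pod dumps above is decimal for the usual 0644. A minimal sketch restricting the files to owner-read (names and data are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-mode-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-mode-demo
      defaultMode: 0400  # files appear as -r--------
EOF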
Jan 22 13:43:29.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:43:29.227: INFO: namespace secrets-7451 deletion completed in 6.233931197s • [SLOW TEST:14.494 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:43:29.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Jan 22 13:43:29.353: INFO: Waiting up to 5m0s for pod "client-containers-adda750e-176b-4186-8046-839ca2592b51" in namespace "containers-2116" to be "success or failure" Jan 22 13:43:29.375: INFO: Pod "client-containers-adda750e-176b-4186-8046-839ca2592b51": Phase="Pending", Reason="", readiness=false. Elapsed: 22.293191ms Jan 22 13:43:31.383: INFO: Pod "client-containers-adda750e-176b-4186-8046-839ca2592b51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030329354s Jan 22 13:43:33.392: INFO: Pod "client-containers-adda750e-176b-4186-8046-839ca2592b51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03899862s Jan 22 13:43:35.400: INFO: Pod "client-containers-adda750e-176b-4186-8046-839ca2592b51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047610665s Jan 22 13:43:37.409: INFO: Pod "client-containers-adda750e-176b-4186-8046-839ca2592b51": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056232986s Jan 22 13:43:39.421: INFO: Pod "client-containers-adda750e-176b-4186-8046-839ca2592b51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068194512s STEP: Saw pod success Jan 22 13:43:39.421: INFO: Pod "client-containers-adda750e-176b-4186-8046-839ca2592b51" satisfied condition "success or failure" Jan 22 13:43:39.425: INFO: Trying to get logs from node iruya-node pod client-containers-adda750e-176b-4186-8046-839ca2592b51 container test-container: STEP: delete the pod Jan 22 13:43:39.490: INFO: Waiting for pod client-containers-adda750e-176b-4186-8046-839ca2592b51 to disappear Jan 22 13:43:39.518: INFO: Pod client-containers-adda750e-176b-4186-8046-839ca2592b51 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:43:39.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2116" for this suite. 
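With both command and args omitted, the kubelet runs the image's own ENTRYPOINT and CMD; the spec under test is essentially just a bare container (the image here is illustrative, the test uses a purpose-built one):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # no command: and no args: -> the image defaults are used verbatim
EOF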
Jan 22 13:43:45.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:43:45.840: INFO: namespace containers-2116 deletion completed in 6.286618804s • [SLOW TEST:16.611 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:43:45.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:43:51.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-391" for this suite. 
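The property checked is that watches opened at the same resourceVersion deliver identical event sequences. The same stream is reachable by hand through the watch API, e.g. via kubectl proxy (resource, namespace and starting version are illustrative):

kubectl proxy --port=8001 &
# Two concurrent watches from the same starting resourceVersion
# must observe the same events in the same order:
curl -s 'http://127.0.0.1:8001/api/v1/namespaces/watch-391/configmaps?watch=true&resourceVersion=0' &
curl -s 'http://127.0.0.1:8001/api/v1/namespaces/watch-391/configmaps?watch=true&resourceVersion=0'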
Jan 22 13:43:57.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:43:57.776: INFO: namespace watch-391 deletion completed in 6.272296178s • [SLOW TEST:11.932 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:43:57.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jan 22 13:43:57.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7756' Jan 22 13:43:58.243: INFO: stderr: "" Jan 22 13:43:58.244: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jan 22 13:43:59.258: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:43:59.258: INFO: Found 0 / 1 Jan 22 13:44:00.258: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:44:00.258: INFO: Found 0 / 1 Jan 22 13:44:01.263: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:44:01.263: INFO: Found 0 / 1 Jan 22 13:44:02.251: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:44:02.251: INFO: Found 0 / 1 Jan 22 13:44:03.253: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:44:03.253: INFO: Found 0 / 1 Jan 22 13:44:04.257: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:44:04.257: INFO: Found 0 / 1 Jan 22 13:44:05.974: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:44:05.974: INFO: Found 0 / 1 Jan 22 13:44:06.253: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:44:06.253: INFO: Found 1 / 1 Jan 22 13:44:06.253: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 22 13:44:06.259: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:44:06.260: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 22 13:44:06.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-vrq6l --namespace=kubectl-7756 -p {"metadata":{"annotations":{"x":"y"}}}' Jan 22 13:44:06.405: INFO: stderr: "" Jan 22 13:44:06.405: INFO: stdout: "pod/redis-master-vrq6l patched\n" STEP: checking annotations Jan 22 13:44:06.422: INFO: Selector matched 1 pods for map[app:redis] Jan 22 13:44:06.422: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
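The patch is a strategic-merge patch against metadata; the command from the log, plus a jsonpath read-back equivalent to the in-process check (the pod name is the one the RC generated for this run):

kubectl --namespace=kubectl-7756 patch pod redis-master-vrq6l \
    -p '{"metadata":{"annotations":{"x":"y"}}}'
# Read the annotation back to confirm it landed:
kubectl --namespace=kubectl-7756 get pod redis-master-vrq6l \
    -o jsonpath='{.metadata.annotations.x}'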
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:44:06.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7756" for this suite. Jan 22 13:44:28.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:44:28.601: INFO: namespace kubectl-7756 deletion completed in 22.171387546s • [SLOW TEST:30.825 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:44:28.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 22 13:44:28.699: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5985d58b-b948-46d9-a1e9-d69fc77bfb9d" in namespace "downward-api-1797" to be "success or failure" Jan 22 13:44:28.771: INFO: Pod "downwardapi-volume-5985d58b-b948-46d9-a1e9-d69fc77bfb9d": Phase="Pending", Reason="", readiness=false. Elapsed: 71.670377ms Jan 22 13:44:30.779: INFO: Pod "downwardapi-volume-5985d58b-b948-46d9-a1e9-d69fc77bfb9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079721031s Jan 22 13:44:32.804: INFO: Pod "downwardapi-volume-5985d58b-b948-46d9-a1e9-d69fc77bfb9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104813682s Jan 22 13:44:34.826: INFO: Pod "downwardapi-volume-5985d58b-b948-46d9-a1e9-d69fc77bfb9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127485742s Jan 22 13:44:37.229: INFO: Pod "downwardapi-volume-5985d58b-b948-46d9-a1e9-d69fc77bfb9d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.530059953s Jan 22 13:44:39.240: INFO: Pod "downwardapi-volume-5985d58b-b948-46d9-a1e9-d69fc77bfb9d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.540895102s STEP: Saw pod success Jan 22 13:44:39.240: INFO: Pod "downwardapi-volume-5985d58b-b948-46d9-a1e9-d69fc77bfb9d" satisfied condition "success or failure" Jan 22 13:44:39.245: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5985d58b-b948-46d9-a1e9-d69fc77bfb9d container client-container: STEP: delete the pod Jan 22 13:44:39.690: INFO: Waiting for pod downwardapi-volume-5985d58b-b948-46d9-a1e9-d69fc77bfb9d to disappear Jan 22 13:44:39.709: INFO: Pod downwardapi-volume-5985d58b-b948-46d9-a1e9-d69fc77bfb9d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:44:39.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1797" for this suite. Jan 22 13:44:45.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:44:46.098: INFO: namespace downward-api-1797 deletion completed in 6.380052725s • [SLOW TEST:17.496 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:44:46.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jan 22 13:44:46.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3199' Jan 22 13:44:46.454: INFO: stderr: "" Jan 22 13:44:46.454: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 22 13:44:46.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3199' Jan 22 13:44:46.643: INFO: stderr: "" Jan 22 13:44:46.643: INFO: stdout: "update-demo-nautilus-5cfwn update-demo-nautilus-wk5vz " Jan 22 13:44:46.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5cfwn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3199' Jan 22 13:44:46.807: INFO: stderr: "" Jan 22 13:44:46.807: INFO: stdout: "" Jan 22 13:44:46.807: INFO: update-demo-nautilus-5cfwn is created but not running Jan 22 13:44:51.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3199' Jan 22 13:44:52.796: INFO: stderr: "" Jan 22 13:44:52.796: INFO: stdout: "update-demo-nautilus-5cfwn update-demo-nautilus-wk5vz " Jan 22 13:44:52.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5cfwn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3199' Jan 22 13:44:53.011: INFO: stderr: "" Jan 22 13:44:53.011: INFO: stdout: "" Jan 22 13:44:53.011: INFO: update-demo-nautilus-5cfwn is created but not running Jan 22 13:44:58.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3199' Jan 22 13:44:58.146: INFO: stderr: "" Jan 22 13:44:58.147: INFO: stdout: "update-demo-nautilus-5cfwn update-demo-nautilus-wk5vz " Jan 22 13:44:58.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5cfwn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3199' Jan 22 13:44:58.249: INFO: stderr: "" Jan 22 13:44:58.249: INFO: stdout: "true" Jan 22 13:44:58.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5cfwn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3199' Jan 22 13:44:58.348: INFO: stderr: "" Jan 22 13:44:58.349: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 13:44:58.349: INFO: validating pod update-demo-nautilus-5cfwn Jan 22 13:44:58.362: INFO: got data: { "image": "nautilus.jpg" } Jan 22 13:44:58.362: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 13:44:58.363: INFO: update-demo-nautilus-5cfwn is verified up and running Jan 22 13:44:58.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wk5vz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3199' Jan 22 13:44:58.448: INFO: stderr: "" Jan 22 13:44:58.448: INFO: stdout: "true" Jan 22 13:44:58.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wk5vz -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3199' Jan 22 13:44:58.539: INFO: stderr: "" Jan 22 13:44:58.539: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 13:44:58.539: INFO: validating pod update-demo-nautilus-wk5vz Jan 22 13:44:58.613: INFO: got data: { "image": "nautilus.jpg" } Jan 22 13:44:58.613: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 13:44:58.613: INFO: update-demo-nautilus-wk5vz is verified up and running STEP: using delete to clean up resources Jan 22 13:44:58.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3199' Jan 22 13:44:58.695: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 22 13:44:58.695: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 22 13:44:58.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3199' Jan 22 13:44:58.786: INFO: stderr: "No resources found.\n" Jan 22 13:44:58.786: INFO: stdout: "" Jan 22 13:44:58.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3199 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 22 13:44:58.864: INFO: stderr: "" Jan 22 13:44:58.864: INFO: stdout: "update-demo-nautilus-5cfwn\nupdate-demo-nautilus-wk5vz\n" Jan 22 13:44:59.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3199' Jan 22 13:45:00.237: INFO: stderr: "No resources found.\n" Jan 22 13:45:00.237: INFO: stdout: "" Jan 22 13:45:00.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3199 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 22 13:45:00.345: INFO: stderr: "" Jan 22 13:45:00.345: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:45:00.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3199" for this suite. 
Jan 22 13:45:22.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:45:22.803: INFO: namespace kubectl-3199 deletion completed in 22.444902637s • [SLOW TEST:36.705 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:45:22.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 22 13:45:23.819: INFO: Pod name wrapped-volume-race-b1a8aa1a-4749-4d0e-963b-f3e22d83d5e5: Found 0 pods out of 5 Jan 22 13:45:28.890: INFO: Pod name wrapped-volume-race-b1a8aa1a-4749-4d0e-963b-f3e22d83d5e5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b1a8aa1a-4749-4d0e-963b-f3e22d83d5e5 in namespace emptydir-wrapper-8332, will wait for the garbage collector to delete the pods Jan 22 13:45:57.028: INFO: Deleting ReplicationController wrapped-volume-race-b1a8aa1a-4749-4d0e-963b-f3e22d83d5e5 took: 18.858682ms Jan 22 13:45:57.530: INFO: Terminating ReplicationController wrapped-volume-race-b1a8aa1a-4749-4d0e-963b-f3e22d83d5e5 pods took: 501.369572ms STEP: Creating RC which spawns configmap-volume pods Jan 22 13:46:42.903: INFO: Pod name wrapped-volume-race-f21195e7-b0b8-418b-bd33-2e6b6ab56e32: Found 0 pods out of 5 Jan 22 13:46:47.923: INFO: Pod name wrapped-volume-race-f21195e7-b0b8-418b-bd33-2e6b6ab56e32: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f21195e7-b0b8-418b-bd33-2e6b6ab56e32 in namespace emptydir-wrapper-8332, will wait for the garbage collector to delete the pods Jan 22 13:47:22.077: INFO: Deleting ReplicationController wrapped-volume-race-f21195e7-b0b8-418b-bd33-2e6b6ab56e32 took: 21.589122ms Jan 22 13:47:22.478: INFO: Terminating ReplicationController wrapped-volume-race-f21195e7-b0b8-418b-bd33-2e6b6ab56e32 pods took: 400.492943ms STEP: Creating RC which spawns configmap-volume pods Jan 22 13:48:07.129: INFO: Pod name wrapped-volume-race-59ad382c-3c78-44e1-a1e3-b7c7e18895dc: Found 0 pods out of 5 Jan 22 13:48:12.153: INFO: Pod name wrapped-volume-race-59ad382c-3c78-44e1-a1e3-b7c7e18895dc: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-59ad382c-3c78-44e1-a1e3-b7c7e18895dc in namespace 
emptydir-wrapper-8332, will wait for the garbage collector to delete the pods Jan 22 13:48:44.332: INFO: Deleting ReplicationController wrapped-volume-race-59ad382c-3c78-44e1-a1e3-b7c7e18895dc took: 58.748627ms Jan 22 13:48:44.733: INFO: Terminating ReplicationController wrapped-volume-race-59ad382c-3c78-44e1-a1e3-b7c7e18895dc pods took: 400.603305ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:49:33.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8332" for this suite. Jan 22 13:49:43.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:49:43.654: INFO: namespace emptydir-wrapper-8332 deletion completed in 10.162367435s • [SLOW TEST:260.846 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:49:43.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 22 13:49:43.746: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c0a8aa5-cefb-4d3d-ac95-6bb922a17623" in namespace "projected-1255" to be "success or failure" Jan 22 13:49:43.761: INFO: Pod "downwardapi-volume-6c0a8aa5-cefb-4d3d-ac95-6bb922a17623": Phase="Pending", Reason="", readiness=false. Elapsed: 14.074351ms Jan 22 13:49:45.773: INFO: Pod "downwardapi-volume-6c0a8aa5-cefb-4d3d-ac95-6bb922a17623": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026198764s Jan 22 13:49:47.781: INFO: Pod "downwardapi-volume-6c0a8aa5-cefb-4d3d-ac95-6bb922a17623": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034488435s Jan 22 13:49:49.802: INFO: Pod "downwardapi-volume-6c0a8aa5-cefb-4d3d-ac95-6bb922a17623": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055474843s Jan 22 13:49:51.814: INFO: Pod "downwardapi-volume-6c0a8aa5-cefb-4d3d-ac95-6bb922a17623": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067836774s Jan 22 13:49:53.831: INFO: Pod "downwardapi-volume-6c0a8aa5-cefb-4d3d-ac95-6bb922a17623": Phase="Pending", Reason="", readiness=false. Elapsed: 10.084159109s Jan 22 13:49:55.858: INFO: Pod "downwardapi-volume-6c0a8aa5-cefb-4d3d-ac95-6bb922a17623": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.111184655s STEP: Saw pod success Jan 22 13:49:55.858: INFO: Pod "downwardapi-volume-6c0a8aa5-cefb-4d3d-ac95-6bb922a17623" satisfied condition "success or failure" Jan 22 13:49:55.874: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6c0a8aa5-cefb-4d3d-ac95-6bb922a17623 container client-container: STEP: delete the pod Jan 22 13:49:56.058: INFO: Waiting for pod downwardapi-volume-6c0a8aa5-cefb-4d3d-ac95-6bb922a17623 to disappear Jan 22 13:49:56.069: INFO: Pod downwardapi-volume-6c0a8aa5-cefb-4d3d-ac95-6bb922a17623 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:49:56.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1255" for this suite. Jan 22 13:50:02.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:50:02.249: INFO: namespace projected-1255 deletion completed in 6.167468898s • [SLOW TEST:18.594 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:50:02.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-da95ce3d-6d74-4a27-83c8-39fd66814684 STEP: Creating a pod to test consume secrets Jan 22 13:50:02.380: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c68e5849-6bb6-45f5-a8dc-5372746545ef" in namespace "projected-584" to be "success or failure" Jan 22 13:50:02.388: INFO: Pod "pod-projected-secrets-c68e5849-6bb6-45f5-a8dc-5372746545ef": Phase="Pending", Reason="", readiness=false. Elapsed: 7.726985ms Jan 22 13:50:04.398: INFO: Pod "pod-projected-secrets-c68e5849-6bb6-45f5-a8dc-5372746545ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017034419s Jan 22 13:50:06.409: INFO: Pod "pod-projected-secrets-c68e5849-6bb6-45f5-a8dc-5372746545ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028508488s Jan 22 13:50:08.424: INFO: Pod "pod-projected-secrets-c68e5849-6bb6-45f5-a8dc-5372746545ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043286785s Jan 22 13:50:10.432: INFO: Pod "pod-projected-secrets-c68e5849-6bb6-45f5-a8dc-5372746545ef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.051236935s STEP: Saw pod success Jan 22 13:50:10.432: INFO: Pod "pod-projected-secrets-c68e5849-6bb6-45f5-a8dc-5372746545ef" satisfied condition "success or failure" Jan 22 13:50:10.435: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c68e5849-6bb6-45f5-a8dc-5372746545ef container projected-secret-volume-test: STEP: delete the pod Jan 22 13:50:10.508: INFO: Waiting for pod pod-projected-secrets-c68e5849-6bb6-45f5-a8dc-5372746545ef to disappear Jan 22 13:50:10.522: INFO: Pod pod-projected-secrets-c68e5849-6bb6-45f5-a8dc-5372746545ef no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:50:10.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-584" for this suite. Jan 22 13:50:16.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:50:16.804: INFO: namespace projected-584 deletion completed in 6.271396546s • [SLOW TEST:14.555 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:50:16.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5842.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5842.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5842.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5842.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 22 13:50:31.031: INFO: File wheezy_udp@dns-test-service-3.dns-5842.svc.cluster.local from pod dns-5842/dns-test-65389ea7-a262-45a7-96c9-cb959d2d7558 contains '' instead of 'foo.example.com.' Jan 22 13:50:31.040: INFO: File jessie_udp@dns-test-service-3.dns-5842.svc.cluster.local from pod dns-5842/dns-test-65389ea7-a262-45a7-96c9-cb959d2d7558 contains '' instead of 'foo.example.com.' 
Jan 22 13:50:31.040: INFO: Lookups using dns-5842/dns-test-65389ea7-a262-45a7-96c9-cb959d2d7558 failed for: [wheezy_udp@dns-test-service-3.dns-5842.svc.cluster.local jessie_udp@dns-test-service-3.dns-5842.svc.cluster.local] Jan 22 13:50:36.085: INFO: DNS probes using dns-test-65389ea7-a262-45a7-96c9-cb959d2d7558 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5842.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5842.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5842.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5842.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 22 13:50:51.333: INFO: File wheezy_udp@dns-test-service-3.dns-5842.svc.cluster.local from pod dns-5842/dns-test-c5a0ce3f-ca29-49bf-a0fb-b955294c0236 contains '' instead of 'bar.example.com.' Jan 22 13:50:51.337: INFO: File jessie_udp@dns-test-service-3.dns-5842.svc.cluster.local from pod dns-5842/dns-test-c5a0ce3f-ca29-49bf-a0fb-b955294c0236 contains '' instead of 'bar.example.com.' Jan 22 13:50:51.337: INFO: Lookups using dns-5842/dns-test-c5a0ce3f-ca29-49bf-a0fb-b955294c0236 failed for: [wheezy_udp@dns-test-service-3.dns-5842.svc.cluster.local jessie_udp@dns-test-service-3.dns-5842.svc.cluster.local] Jan 22 13:50:56.349: INFO: File wheezy_udp@dns-test-service-3.dns-5842.svc.cluster.local from pod dns-5842/dns-test-c5a0ce3f-ca29-49bf-a0fb-b955294c0236 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 22 13:50:56.356: INFO: File jessie_udp@dns-test-service-3.dns-5842.svc.cluster.local from pod dns-5842/dns-test-c5a0ce3f-ca29-49bf-a0fb-b955294c0236 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 22 13:50:56.356: INFO: Lookups using dns-5842/dns-test-c5a0ce3f-ca29-49bf-a0fb-b955294c0236 failed for: [wheezy_udp@dns-test-service-3.dns-5842.svc.cluster.local jessie_udp@dns-test-service-3.dns-5842.svc.cluster.local] Jan 22 13:51:01.350: INFO: File wheezy_udp@dns-test-service-3.dns-5842.svc.cluster.local from pod dns-5842/dns-test-c5a0ce3f-ca29-49bf-a0fb-b955294c0236 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 22 13:51:01.365: INFO: File jessie_udp@dns-test-service-3.dns-5842.svc.cluster.local from pod dns-5842/dns-test-c5a0ce3f-ca29-49bf-a0fb-b955294c0236 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jan 22 13:51:01.365: INFO: Lookups using dns-5842/dns-test-c5a0ce3f-ca29-49bf-a0fb-b955294c0236 failed for: [wheezy_udp@dns-test-service-3.dns-5842.svc.cluster.local jessie_udp@dns-test-service-3.dns-5842.svc.cluster.local] Jan 22 13:51:06.360: INFO: DNS probes using dns-test-c5a0ce3f-ca29-49bf-a0fb-b955294c0236 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5842.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5842.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5842.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5842.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 22 13:51:20.676: INFO: File wheezy_udp@dns-test-service-3.dns-5842.svc.cluster.local from pod dns-5842/dns-test-2e678079-f64d-4c98-9062-2931fb3bec72 contains '' instead of '10.105.24.55' Jan 22 13:51:20.687: INFO: File jessie_udp@dns-test-service-3.dns-5842.svc.cluster.local from pod dns-5842/dns-test-2e678079-f64d-4c98-9062-2931fb3bec72 contains '' instead of '10.105.24.55' Jan 22 13:51:20.687: INFO: Lookups using dns-5842/dns-test-2e678079-f64d-4c98-9062-2931fb3bec72 failed for: [wheezy_udp@dns-test-service-3.dns-5842.svc.cluster.local jessie_udp@dns-test-service-3.dns-5842.svc.cluster.local] Jan 22 13:51:25.712: INFO: DNS probes using dns-test-2e678079-f64d-4c98-9062-2931fb3bec72 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:51:25.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5842" for this suite. 
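Each probe pod in this DNS test just loops dig against cluster DNS and writes the answer into a /results file that the prober reads back; an answer file that is still empty shows up above as contains '' until the changed record propagates. Reduced to single queries, with the service name and expected answers taken from this run:
# While the service is type=ExternalName, the name resolves as a CNAME:
dig +short dns-test-service-3.dns-5842.svc.cluster.local CNAME    # foo.example.com., later bar.example.com.
# After the switch to type=ClusterIP, the same name answers with an A record:
dig +short dns-test-service-3.dns-5842.svc.cluster.local A        # 10.105.24.55 in this run
A comparable service can be created outside the harness with kubectl create service externalname dns-test-service-3 --external-name=foo.example.com; the test builds its service programmatically, so that form is only an illustration.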
Jan 22 13:51:34.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:51:34.256: INFO: namespace dns-5842 deletion completed in 8.298186373s • [SLOW TEST:77.449 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:51:34.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 22 13:51:34.505: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1262,SelfLink:/api/v1/namespaces/watch-1262/configmaps/e2e-watch-test-resource-version,UID:cb394378-7e14-4a96-8aad-6db120155c10,ResourceVersion:21437614,Generation:0,CreationTimestamp:2020-01-22 13:51:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 22 13:51:34.506: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1262,SelfLink:/api/v1/namespaces/watch-1262/configmaps/e2e-watch-test-resource-version,UID:cb394378-7e14-4a96-8aad-6db120155c10,ResourceVersion:21437615,Generation:0,CreationTimestamp:2020-01-22 13:51:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:51:34.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1262" for this suite. 
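The two events above are the resourceVersion contract at work: a watch opened at the version returned by the first update replays, in order, only the changes made after that version, here the second modification (mutation: 2, ResourceVersion 21437614) and the deletion (21437615), and not the creation or the first update. A rough command-line sketch of the same idea, fetching a live version first so no historical value has to be guessed; the raw URL is the API's standard watch form, not something this test invokes:
# Capture a configmap's current resourceVersion (names from this log):
RV=$(kubectl --kubeconfig=/root/.kube/config get configmap e2e-watch-test-resource-version --namespace=watch-1262 -o jsonpath='{.metadata.resourceVersion}')
# Stream, as JSON watch events, every change made after that version:
kubectl --kubeconfig=/root/.kube/config get --raw "/api/v1/namespaces/watch-1262/configmaps?watch=true&resourceVersion=${RV}"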
Jan 22 13:51:40.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:51:40.742: INFO: namespace watch-1262 deletion completed in 6.210262062s • [SLOW TEST:6.485 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:51:40.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jan 22 13:51:40.957: INFO: Number of nodes with available pods: 0 Jan 22 13:51:40.958: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:51:42.764: INFO: Number of nodes with available pods: 0 Jan 22 13:51:42.764: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:51:43.314: INFO: Number of nodes with available pods: 0 Jan 22 13:51:43.314: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:51:43.986: INFO: Number of nodes with available pods: 0 Jan 22 13:51:43.986: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:51:44.982: INFO: Number of nodes with available pods: 0 Jan 22 13:51:44.982: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:51:47.883: INFO: Number of nodes with available pods: 0 Jan 22 13:51:47.883: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:51:48.049: INFO: Number of nodes with available pods: 0 Jan 22 13:51:48.049: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:51:48.983: INFO: Number of nodes with available pods: 0 Jan 22 13:51:48.983: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:51:50.104: INFO: Number of nodes with available pods: 0 Jan 22 13:51:50.104: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:51:50.971: INFO: Number of nodes with available pods: 0 Jan 22 13:51:50.971: INFO: Node iruya-node is running more than one daemon pod Jan 22 13:51:52.011: INFO: Number of nodes with available pods: 2 Jan 22 13:51:52.011: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jan 22 13:51:52.107: INFO: Number of nodes with available pods: 1 Jan 22 13:51:52.107: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 22 13:51:53.126: INFO: Number of nodes with available pods: 1 Jan 22 13:51:53.126: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 22 13:51:54.365: INFO: Number of nodes with available pods: 1 Jan 22 13:51:54.365: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 22 13:51:55.127: INFO: Number of nodes with available pods: 1 Jan 22 13:51:55.127: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 22 13:51:56.177: INFO: Number of nodes with available pods: 1 Jan 22 13:51:56.177: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 22 13:51:57.127: INFO: Number of nodes with available pods: 1 Jan 22 13:51:57.127: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 22 13:51:58.175: INFO: Number of nodes with available pods: 1 Jan 22 13:51:58.175: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 22 13:51:59.132: INFO: Number of nodes with available pods: 1 Jan 22 13:51:59.132: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 22 13:52:00.677: INFO: Number of nodes with available pods: 1 Jan 22 13:52:00.677: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 22 13:52:01.122: INFO: Number of nodes with available pods: 1 Jan 22 13:52:01.122: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 22 13:52:02.146: INFO: Number of nodes with available pods: 1 Jan 22 13:52:02.146: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 22 13:52:03.139: INFO: Number of nodes with available pods: 1 Jan 22 13:52:03.139: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 22 13:52:04.200: INFO: Number of nodes with available pods: 1 Jan 22 13:52:04.200: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 22 13:52:05.125: INFO: Number of nodes with available pods: 1 Jan 22 13:52:05.125: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 22 13:52:06.163: INFO: Number of nodes with available pods: 2 Jan 22 13:52:06.163: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3805, will wait for the garbage collector to delete the pods Jan 22 13:52:06.233: INFO: Deleting DaemonSet.extensions daemon-set took: 9.988808ms Jan 22 13:52:06.533: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.456338ms Jan 22 13:52:18.140: INFO: Number of nodes with available pods: 0 Jan 22 13:52:18.140: INFO: Number of running nodes: 0, number of available pods: 0 Jan 22 13:52:18.143: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3805/daemonsets","resourceVersion":"21437731"},"items":null} Jan 22 13:52:18.148: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3805/pods","resourceVersion":"21437731"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:52:18.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3805" for this suite. Jan 22 13:52:24.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:52:24.409: INFO: namespace daemonsets-3805 deletion completed in 6.200278074s • [SLOW TEST:43.666 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:52:24.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Jan 22 13:52:24.467: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:52:24.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3463" for this suite.
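Passing -p 0 (equivalently --port=0) makes kubectl proxy bind an arbitrary free port and print it in its startup message, which the test reads before curling /api/ through the proxy. By hand the sequence looks like this; the port in the curl line is only a stand-in for whatever the startup message announces, and --disable-filter is the same flag the test passes (kubectl warns that disabling the request filter is unsafe outside a controlled setting):
# Prints something like: Starting to serve on 127.0.0.1:<port>
kubectl --kubeconfig=/root/.kube/config proxy --port=0 --disable-filter &
# Substitute the announced port; a JSON list of API paths comes back:
curl http://127.0.0.1:8001/api/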
Jan 22 13:52:30.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:52:30.754: INFO: namespace kubectl-3463 deletion completed in 6.191066082s • [SLOW TEST:6.345 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:52:30.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1693 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 22 13:52:30.842: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 22 13:53:07.082: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-1693 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 13:53:07.082: INFO: >>> kubeConfig: /root/.kube/config I0122 13:53:07.179739 9 log.go:172] (0xc000459ef0) (0xc000b4c500) Create stream I0122 13:53:07.179800 9 log.go:172] (0xc000459ef0) (0xc000b4c500) Stream added, broadcasting: 1 I0122 13:53:07.189703 9 log.go:172] (0xc000459ef0) Reply frame received for 1 I0122 13:53:07.189757 9 log.go:172] (0xc000459ef0) (0xc000b4c640) Create stream I0122 13:53:07.189770 9 log.go:172] (0xc000459ef0) (0xc000b4c640) Stream added, broadcasting: 3 I0122 13:53:07.192548 9 log.go:172] (0xc000459ef0) Reply frame received for 3 I0122 13:53:07.192613 9 log.go:172] (0xc000459ef0) (0xc000b4c780) Create stream I0122 13:53:07.192651 9 log.go:172] (0xc000459ef0) (0xc000b4c780) Stream added, broadcasting: 5 I0122 13:53:07.195565 9 log.go:172] (0xc000459ef0) Reply frame received for 5 I0122 13:53:07.475683 9 log.go:172] (0xc000459ef0) Data frame received for 3 I0122 13:53:07.475782 9 log.go:172] (0xc000b4c640) (3) Data frame handling I0122 13:53:07.476072 9 log.go:172] (0xc000b4c640) (3) Data frame sent I0122 13:53:07.664460 9 log.go:172] (0xc000459ef0) (0xc000b4c640) Stream removed, broadcasting: 3 I0122 13:53:07.664672 9 log.go:172] (0xc000459ef0) Data frame received for 1 I0122 13:53:07.664721 9 log.go:172] (0xc000b4c500) (1) Data frame handling I0122 13:53:07.664768 9 log.go:172] (0xc000b4c500) (1) Data frame sent I0122 13:53:07.665018 9 log.go:172] (0xc000459ef0) (0xc000b4c500) Stream removed, broadcasting: 1 
I0122 13:53:07.665096 9 log.go:172] (0xc000459ef0) (0xc000b4c780) Stream removed, broadcasting: 5 I0122 13:53:07.665135 9 log.go:172] (0xc000459ef0) Go away received I0122 13:53:07.665262 9 log.go:172] (0xc000459ef0) (0xc000b4c500) Stream removed, broadcasting: 1 I0122 13:53:07.665281 9 log.go:172] (0xc000459ef0) (0xc000b4c640) Stream removed, broadcasting: 3 I0122 13:53:07.665293 9 log.go:172] (0xc000459ef0) (0xc000b4c780) Stream removed, broadcasting: 5 Jan 22 13:53:07.665: INFO: Waiting for endpoints: map[] Jan 22 13:53:07.675: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-1693 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 13:53:07.675: INFO: >>> kubeConfig: /root/.kube/config I0122 13:53:07.736877 9 log.go:172] (0xc00102c420) (0xc002724500) Create stream I0122 13:53:07.736962 9 log.go:172] (0xc00102c420) (0xc002724500) Stream added, broadcasting: 1 I0122 13:53:07.744051 9 log.go:172] (0xc00102c420) Reply frame received for 1 I0122 13:53:07.744088 9 log.go:172] (0xc00102c420) (0xc000236dc0) Create stream I0122 13:53:07.744106 9 log.go:172] (0xc00102c420) (0xc000236dc0) Stream added, broadcasting: 3 I0122 13:53:07.745924 9 log.go:172] (0xc00102c420) Reply frame received for 3 I0122 13:53:07.745945 9 log.go:172] (0xc00102c420) (0xc0027245a0) Create stream I0122 13:53:07.745951 9 log.go:172] (0xc00102c420) (0xc0027245a0) Stream added, broadcasting: 5 I0122 13:53:07.749230 9 log.go:172] (0xc00102c420) Reply frame received for 5 I0122 13:53:07.859229 9 log.go:172] (0xc00102c420) Data frame received for 3 I0122 13:53:07.859379 9 log.go:172] (0xc000236dc0) (3) Data frame handling I0122 13:53:07.859411 9 log.go:172] (0xc000236dc0) (3) Data frame sent I0122 13:53:07.998313 9 log.go:172] (0xc00102c420) (0xc000236dc0) Stream removed, broadcasting: 3 I0122 13:53:07.998384 9 log.go:172] (0xc00102c420) Data frame received for 1 I0122 13:53:07.998417 9 log.go:172] (0xc002724500) (1) Data frame handling I0122 13:53:07.998439 9 log.go:172] (0xc002724500) (1) Data frame sent I0122 13:53:07.998458 9 log.go:172] (0xc00102c420) (0xc002724500) Stream removed, broadcasting: 1 I0122 13:53:07.998484 9 log.go:172] (0xc00102c420) (0xc0027245a0) Stream removed, broadcasting: 5 I0122 13:53:07.998841 9 log.go:172] (0xc00102c420) Go away received I0122 13:53:07.998875 9 log.go:172] (0xc00102c420) (0xc002724500) Stream removed, broadcasting: 1 I0122 13:53:07.998894 9 log.go:172] (0xc00102c420) (0xc000236dc0) Stream removed, broadcasting: 3 I0122 13:53:07.998924 9 log.go:172] (0xc00102c420) (0xc0027245a0) Stream removed, broadcasting: 5 Jan 22 13:53:07.999: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:53:07.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1693" for this suite. 
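Both probes in this test run through the hostexec container of host-test-container-pod: curl asks the test container listening on 10.44.0.2:8080 to /dial each peer pod over UDP with the request hostName, and the handler relays the answers back as JSON, so a response naming each peer (and the final Waiting for endpoints: map[], an empty map of endpoints still missing) is what passes the check. The two probe commands, verbatim from this run:
curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'
curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'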
Jan 22 13:53:20.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:53:20.143: INFO: namespace pod-network-test-1693 deletion completed in 12.134473753s • [SLOW TEST:49.389 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:53:20.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 22 13:53:20.257: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60150ac8-b3ad-4032-8c8a-94f64566e674" in namespace "downward-api-761" to be "success or failure" Jan 22 13:53:20.266: INFO: Pod "downwardapi-volume-60150ac8-b3ad-4032-8c8a-94f64566e674": Phase="Pending", Reason="", readiness=false. Elapsed: 9.171847ms Jan 22 13:53:22.275: INFO: Pod "downwardapi-volume-60150ac8-b3ad-4032-8c8a-94f64566e674": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017614099s Jan 22 13:53:24.287: INFO: Pod "downwardapi-volume-60150ac8-b3ad-4032-8c8a-94f64566e674": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029956335s Jan 22 13:53:26.301: INFO: Pod "downwardapi-volume-60150ac8-b3ad-4032-8c8a-94f64566e674": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043398285s Jan 22 13:53:28.311: INFO: Pod "downwardapi-volume-60150ac8-b3ad-4032-8c8a-94f64566e674": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053709753s Jan 22 13:53:30.322: INFO: Pod "downwardapi-volume-60150ac8-b3ad-4032-8c8a-94f64566e674": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.0652356s STEP: Saw pod success Jan 22 13:53:30.323: INFO: Pod "downwardapi-volume-60150ac8-b3ad-4032-8c8a-94f64566e674" satisfied condition "success or failure" Jan 22 13:53:30.329: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-60150ac8-b3ad-4032-8c8a-94f64566e674 container client-container: STEP: delete the pod Jan 22 13:53:30.458: INFO: Waiting for pod downwardapi-volume-60150ac8-b3ad-4032-8c8a-94f64566e674 to disappear Jan 22 13:53:30.474: INFO: Pod downwardapi-volume-60150ac8-b3ad-4032-8c8a-94f64566e674 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:53:30.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-761" for this suite. Jan 22 13:53:36.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:53:36.662: INFO: namespace downward-api-761 deletion completed in 6.175207718s • [SLOW TEST:16.517 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:53:36.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-1ac4c6ed-18a5-4b38-9b21-376adb520006 STEP: Creating a pod to test consume configMaps Jan 22 13:53:36.772: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bbec6ee1-12ba-492f-bbad-431eb81ddeea" in namespace "projected-6928" to be "success or failure" Jan 22 13:53:36.779: INFO: Pod "pod-projected-configmaps-bbec6ee1-12ba-492f-bbad-431eb81ddeea": Phase="Pending", Reason="", readiness=false. Elapsed: 7.045937ms Jan 22 13:53:38.795: INFO: Pod "pod-projected-configmaps-bbec6ee1-12ba-492f-bbad-431eb81ddeea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023222843s Jan 22 13:53:40.804: INFO: Pod "pod-projected-configmaps-bbec6ee1-12ba-492f-bbad-431eb81ddeea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031845078s Jan 22 13:53:42.812: INFO: Pod "pod-projected-configmaps-bbec6ee1-12ba-492f-bbad-431eb81ddeea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040265251s Jan 22 13:53:44.825: INFO: Pod "pod-projected-configmaps-bbec6ee1-12ba-492f-bbad-431eb81ddeea": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.053368546s Jan 22 13:53:46.834: INFO: Pod "pod-projected-configmaps-bbec6ee1-12ba-492f-bbad-431eb81ddeea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061793507s STEP: Saw pod success Jan 22 13:53:46.834: INFO: Pod "pod-projected-configmaps-bbec6ee1-12ba-492f-bbad-431eb81ddeea" satisfied condition "success or failure" Jan 22 13:53:46.838: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-bbec6ee1-12ba-492f-bbad-431eb81ddeea container projected-configmap-volume-test: STEP: delete the pod Jan 22 13:53:47.449: INFO: Waiting for pod pod-projected-configmaps-bbec6ee1-12ba-492f-bbad-431eb81ddeea to disappear Jan 22 13:53:47.471: INFO: Pod pod-projected-configmaps-bbec6ee1-12ba-492f-bbad-431eb81ddeea no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:53:47.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6928" for this suite. Jan 22 13:53:53.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:53:53.729: INFO: namespace projected-6928 deletion completed in 6.243967863s • [SLOW TEST:17.066 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:53:53.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 22 13:53:53.839: INFO: Waiting up to 5m0s for pod "pod-481faf11-d6fd-4991-9c2e-e740f3c63809" in namespace "emptydir-6504" to be "success or failure" Jan 22 13:53:53.854: INFO: Pod "pod-481faf11-d6fd-4991-9c2e-e740f3c63809": Phase="Pending", Reason="", readiness=false. Elapsed: 14.661981ms Jan 22 13:53:55.876: INFO: Pod "pod-481faf11-d6fd-4991-9c2e-e740f3c63809": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036538963s Jan 22 13:53:57.922: INFO: Pod "pod-481faf11-d6fd-4991-9c2e-e740f3c63809": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082619072s Jan 22 13:53:59.933: INFO: Pod "pod-481faf11-d6fd-4991-9c2e-e740f3c63809": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093923487s Jan 22 13:54:01.946: INFO: Pod "pod-481faf11-d6fd-4991-9c2e-e740f3c63809": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105949421s Jan 22 13:54:03.957: INFO: Pod "pod-481faf11-d6fd-4991-9c2e-e740f3c63809": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.117145824s STEP: Saw pod success Jan 22 13:54:03.957: INFO: Pod "pod-481faf11-d6fd-4991-9c2e-e740f3c63809" satisfied condition "success or failure" Jan 22 13:54:03.962: INFO: Trying to get logs from node iruya-node pod pod-481faf11-d6fd-4991-9c2e-e740f3c63809 container test-container: STEP: delete the pod Jan 22 13:54:04.032: INFO: Waiting for pod pod-481faf11-d6fd-4991-9c2e-e740f3c63809 to disappear Jan 22 13:54:04.090: INFO: Pod pod-481faf11-d6fd-4991-9c2e-e740f3c63809 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 13:54:04.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6504" for this suite. Jan 22 13:54:10.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 13:54:10.281: INFO: namespace emptydir-6504 deletion completed in 6.185064574s • [SLOW TEST:16.552 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 13:54:10.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-19370d85-421d-44d0-870d-62c3b88181bc STEP: Creating a pod to test consume secrets Jan 22 13:54:10.400: INFO: Waiting up to 5m0s for pod "pod-secrets-7f8b866d-9f54-4e9e-8d06-502991fb20a5" in namespace "secrets-1725" to be "success or failure" Jan 22 13:54:10.410: INFO: Pod "pod-secrets-7f8b866d-9f54-4e9e-8d06-502991fb20a5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.425092ms Jan 22 13:54:12.418: INFO: Pod "pod-secrets-7f8b866d-9f54-4e9e-8d06-502991fb20a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017823286s Jan 22 13:54:14.425: INFO: Pod "pod-secrets-7f8b866d-9f54-4e9e-8d06-502991fb20a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024072082s Jan 22 13:54:16.432: INFO: Pod "pod-secrets-7f8b866d-9f54-4e9e-8d06-502991fb20a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031658155s Jan 22 13:54:18.452: INFO: Pod "pod-secrets-7f8b866d-9f54-4e9e-8d06-502991fb20a5": Phase="Succeeded", Reason="", readiness=false. 
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 13:54:25.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-cd6523d2-5abd-4dbd-826e-fb2ba9cba033
STEP: Creating a pod to test consume configMaps
Jan 22 13:54:25.497: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ac953b36-9f60-432c-bb12-604d2f625b93" in namespace "projected-7740" to be "success or failure"
Jan 22 13:54:25.521: INFO: Pod "pod-projected-configmaps-ac953b36-9f60-432c-bb12-604d2f625b93": Phase="Pending", Reason="", readiness=false. Elapsed: 23.882044ms
Jan 22 13:54:27.530: INFO: Pod "pod-projected-configmaps-ac953b36-9f60-432c-bb12-604d2f625b93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033381837s
Jan 22 13:54:29.539: INFO: Pod "pod-projected-configmaps-ac953b36-9f60-432c-bb12-604d2f625b93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042081064s
Jan 22 13:54:31.548: INFO: Pod "pod-projected-configmaps-ac953b36-9f60-432c-bb12-604d2f625b93": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051313627s
Jan 22 13:54:33.558: INFO: Pod "pod-projected-configmaps-ac953b36-9f60-432c-bb12-604d2f625b93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060745296s
STEP: Saw pod success
Jan 22 13:54:33.558: INFO: Pod "pod-projected-configmaps-ac953b36-9f60-432c-bb12-604d2f625b93" satisfied condition "success or failure"
Jan 22 13:54:33.563: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-ac953b36-9f60-432c-bb12-604d2f625b93 container projected-configmap-volume-test:
STEP: delete the pod
Jan 22 13:54:33.646: INFO: Waiting for pod pod-projected-configmaps-ac953b36-9f60-432c-bb12-604d2f625b93 to disappear
Jan 22 13:54:33.654: INFO: Pod pod-projected-configmaps-ac953b36-9f60-432c-bb12-604d2f625b93 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:54:33.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7740" for this suite.
Jan 22 13:54:39.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:54:39.854: INFO: namespace projected-7740 deletion completed in 6.192284512s
• [SLOW TEST:14.470 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
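The projected-configMap variant differs from a plain configMap volume only in nesting the source under a projected volume, which can merge several sources into one mount. A minimal sketch; the configMap name and key are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config       # must contain a key named data-1 for this command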
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 13:54:39.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-4125b073-81fe-4fe3-9a4a-90a7dd018a85
STEP: Creating a pod to test consume secrets
Jan 22 13:54:40.095: INFO: Waiting up to 5m0s for pod "pod-secrets-a7500e5e-d321-4a1f-b90e-5f17f7e2c74e" in namespace "secrets-9541" to be "success or failure"
Jan 22 13:54:40.121: INFO: Pod "pod-secrets-a7500e5e-d321-4a1f-b90e-5f17f7e2c74e": Phase="Pending", Reason="", readiness=false. Elapsed: 26.616559ms
Jan 22 13:54:42.144: INFO: Pod "pod-secrets-a7500e5e-d321-4a1f-b90e-5f17f7e2c74e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048719221s
Jan 22 13:54:44.151: INFO: Pod "pod-secrets-a7500e5e-d321-4a1f-b90e-5f17f7e2c74e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056635962s
Jan 22 13:54:46.158: INFO: Pod "pod-secrets-a7500e5e-d321-4a1f-b90e-5f17f7e2c74e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063515593s
Jan 22 13:54:48.215: INFO: Pod "pod-secrets-a7500e5e-d321-4a1f-b90e-5f17f7e2c74e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.120304045s
STEP: Saw pod success
Jan 22 13:54:48.215: INFO: Pod "pod-secrets-a7500e5e-d321-4a1f-b90e-5f17f7e2c74e" satisfied condition "success or failure"
Jan 22 13:54:48.235: INFO: Trying to get logs from node iruya-node pod pod-secrets-a7500e5e-d321-4a1f-b90e-5f17f7e2c74e container secret-volume-test:
STEP: delete the pod
Jan 22 13:54:48.319: INFO: Waiting for pod pod-secrets-a7500e5e-d321-4a1f-b90e-5f17f7e2c74e to disappear
Jan 22 13:54:48.403: INFO: Pod pod-secrets-a7500e5e-d321-4a1f-b90e-5f17f7e2c74e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:54:48.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9541" for this suite.
Jan 22 13:54:54.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:54:54.668: INFO: namespace secrets-9541 deletion completed in 6.255061686s
• [SLOW TEST:14.812 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
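The "with mappings" case renames keys on the way in: an items list remaps a secret key to a custom file path instead of the default file name. A minimal sketch; the secret and key names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-demo   # hypothetical; must contain key data-1
      items:
      - key: data-1                      # key inside the Secret
        path: new-path-data-1            # file appears at <mountPath>/new-path-data-1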
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 13:54:54.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 22 13:54:54.773: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c63426df-dad3-476d-bef4-554a0462fb81" in namespace "projected-2059" to be "success or failure"
Jan 22 13:54:55.432: INFO: Pod "downwardapi-volume-c63426df-dad3-476d-bef4-554a0462fb81": Phase="Pending", Reason="", readiness=false. Elapsed: 658.827216ms
Jan 22 13:54:57.443: INFO: Pod "downwardapi-volume-c63426df-dad3-476d-bef4-554a0462fb81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.670111124s
Jan 22 13:54:59.465: INFO: Pod "downwardapi-volume-c63426df-dad3-476d-bef4-554a0462fb81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.692360741s
Jan 22 13:55:01.473: INFO: Pod "downwardapi-volume-c63426df-dad3-476d-bef4-554a0462fb81": Phase="Pending", Reason="", readiness=false. Elapsed: 6.700064769s
Jan 22 13:55:03.486: INFO: Pod "downwardapi-volume-c63426df-dad3-476d-bef4-554a0462fb81": Phase="Pending", Reason="", readiness=false. Elapsed: 8.712835119s
Jan 22 13:55:05.495: INFO: Pod "downwardapi-volume-c63426df-dad3-476d-bef4-554a0462fb81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.722726015s
STEP: Saw pod success
Jan 22 13:55:05.496: INFO: Pod "downwardapi-volume-c63426df-dad3-476d-bef4-554a0462fb81" satisfied condition "success or failure"
Jan 22 13:55:05.504: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c63426df-dad3-476d-bef4-554a0462fb81 container client-container:
STEP: delete the pod
Jan 22 13:55:05.663: INFO: Waiting for pod downwardapi-volume-c63426df-dad3-476d-bef4-554a0462fb81 to disappear
Jan 22 13:55:05.677: INFO: Pod downwardapi-volume-c63426df-dad3-476d-bef4-554a0462fb81 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:55:05.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2059" for this suite.
Jan 22 13:55:11.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:55:11.861: INFO: namespace projected-2059 deletion completed in 6.174539361s
• [SLOW TEST:17.192 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
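The memory-limit value reaches the volume through a resourceFieldRef, which the kubelet resolves against the container's resource limits. A minimal sketch; names, the limit, and the divisor are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-limit-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi              # the value exposed below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi        # file contains "64" with this divisor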
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 13:55:11.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 22 13:55:12.012: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:55:36.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-428" for this suite.
Jan 22 13:55:42.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:55:42.820: INFO: namespace pods-428 deletion completed in 6.220389141s
• [SLOW TEST:30.959 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
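The submit-and-remove test drives the plain pod lifecycle through a watch: create the pod, observe the ADDED event, delete it with a grace period, then observe the terminating updates and the final DELETED event. A sketch of the object involved, with the client-side steps as comments; the image and names are illustrative, not taken from this run:

# kubectl apply -f pod.yaml && kubectl get pods -w -l name=submit-demo
# kubectl delete pod submit-remove-demo --grace-period=30
apiVersion: v1
kind: Pod
metadata:
  name: submit-remove-demo        # hypothetical name
  labels:
    name: submit-demo
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1   # any long-running image works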
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 13:55:42.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0122 13:55:45.988990       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 22 13:55:45.989: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:55:45.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7540" for this suite.
Jan 22 13:55:52.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:55:52.421: INFO: namespace gc-7540 deletion completed in 6.428276943s
• [SLOW TEST:9.598 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
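Non-orphaning deletion works because every object a controller creates points back at its owner; once the Deployment is gone, the garbage collector removes anything whose only owner was that Deployment. The ownerReference stamped on the generated ReplicaSet looks roughly like this sketch; names and the uid are illustrative placeholders:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: demo-deploy-7c4bdb86cc    # hash suffix assigned by the Deployment controller
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: demo-deploy
    uid: 6e8f3c6a-0000-0000-0000-000000000000   # uid of the owning Deployment (placeholder)
    controller: true
    blockOwnerDeletion: true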
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 13:55:52.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-5ad0cded-586a-44ce-adfe-22122f76ad55
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:55:52.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1962" for this suite.
Jan 22 13:55:58.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:55:58.622: INFO: namespace configmap-1962 deletion completed in 6.127115672s
• [SLOW TEST:6.201 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 13:55:58.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:55:58.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3692" for this suite.
Jan 22 13:56:20.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:56:20.902: INFO: namespace pods-3692 deletion completed in 22.144082535s
• [SLOW TEST:22.280 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
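The QOS class verified above is computed by the API server from the pod's resource requests and limits and surfaced in status.qosClass. A sketch of a pod that lands in the Guaranteed class; the image and values are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                  # hypothetical name
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:                     # requests == limits for every container => Guaranteed
        cpu: 100m
        memory: 128Mi
# requests < limits (or only some set) => Burstable; nothing set => BestEffort.
# Check with: kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'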
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 13:56:20.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 22 13:56:21.051: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5365a965-bb4d-4f7b-8dfb-cbf76bb07bb6" in namespace "projected-2789" to be "success or failure"
Jan 22 13:56:21.057: INFO: Pod "downwardapi-volume-5365a965-bb4d-4f7b-8dfb-cbf76bb07bb6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131942ms
Jan 22 13:56:23.069: INFO: Pod "downwardapi-volume-5365a965-bb4d-4f7b-8dfb-cbf76bb07bb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018152693s
Jan 22 13:56:25.078: INFO: Pod "downwardapi-volume-5365a965-bb4d-4f7b-8dfb-cbf76bb07bb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026813881s
Jan 22 13:56:27.097: INFO: Pod "downwardapi-volume-5365a965-bb4d-4f7b-8dfb-cbf76bb07bb6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046476451s
Jan 22 13:56:29.106: INFO: Pod "downwardapi-volume-5365a965-bb4d-4f7b-8dfb-cbf76bb07bb6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055288528s
Jan 22 13:56:31.115: INFO: Pod "downwardapi-volume-5365a965-bb4d-4f7b-8dfb-cbf76bb07bb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06391028s
STEP: Saw pod success
Jan 22 13:56:31.115: INFO: Pod "downwardapi-volume-5365a965-bb4d-4f7b-8dfb-cbf76bb07bb6" satisfied condition "success or failure"
Jan 22 13:56:31.120: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5365a965-bb4d-4f7b-8dfb-cbf76bb07bb6 container client-container:
STEP: delete the pod
Jan 22 13:56:31.198: INFO: Waiting for pod downwardapi-volume-5365a965-bb4d-4f7b-8dfb-cbf76bb07bb6 to disappear
Jan 22 13:56:31.228: INFO: Pod downwardapi-volume-5365a965-bb4d-4f7b-8dfb-cbf76bb07bb6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:56:31.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2789" for this suite.
Jan 22 13:56:37.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:56:37.461: INFO: namespace projected-2789 deletion completed in 6.227298869s
• [SLOW TEST:16.559 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 13:56:37.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-3956ceed-5a39-4299-8f02-72e1d5caa1ef in namespace container-probe-9132
Jan 22 13:56:45.619: INFO: Started pod busybox-3956ceed-5a39-4299-8f02-72e1d5caa1ef in namespace container-probe-9132
STEP: checking the pod's current state and verifying that restartCount is present
Jan 22 13:56:45.625: INFO: Initial restart count of pod busybox-3956ceed-5a39-4299-8f02-72e1d5caa1ef is 0
Jan 22 13:57:39.927: INFO: Restart count of pod container-probe-9132/busybox-3956ceed-5a39-4299-8f02-72e1d5caa1ef is now 1 (54.302330012s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:57:40.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9132" for this suite.
Jan 22 13:57:46.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:57:46.182: INFO: namespace container-probe-9132 deletion completed in 6.147146865s
• [SLOW TEST:68.721 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
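The restart counted above is the liveness machinery doing its job: the container removes its own health file partway through, the exec probe starts failing, and the kubelet kills and restarts the container, bumping restartCount. A sketch of the classic shape of such a pod; the timings and name are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo        # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # exits non-zero once the file is gone
      initialDelaySeconds: 5
      periodSeconds: 5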
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 13:57:46.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0122 13:58:27.930680       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 22 13:58:27.930: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:58:27.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6533" for this suite.
Jan 22 13:58:45.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:58:46.073: INFO: namespace gc-6533 deletion completed in 18.135928717s
• [SLOW TEST:59.891 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
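"Delete options say so" refers to the deletion propagation policy: with Orphan propagation the dependents have their ownerReferences stripped instead of being deleted, so the rc's pods keep running, which is exactly what the 30-second check above verifies. A sketch of both ways to request that; the flag spelling is the 1.15-era kubectl form and the rc name is illustrative:

# CLI:  kubectl delete rc demo-rc --cascade=false
# API:  send this body with the DELETE request for the rc
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan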
[sig-network] DNS should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 13:58:46.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1364.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1364.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1364.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1364.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1364.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1364.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1364.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1364.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1364.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1364.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1364.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1364.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1364.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 94.209.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.209.94_udp@PTR;check="$$(dig +tcp +noall +answer +search 94.209.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.209.94_tcp@PTR;sleep 1; done
PTR)" && test -n "$$check" && echo OK > /results/10.111.209.94_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1364.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1364.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1364.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1364.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1364.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1364.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1364.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1364.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1364.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1364.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1364.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1364.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1364.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 94.209.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.209.94_udp@PTR;check="$$(dig +tcp +noall +answer +search 94.209.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.209.94_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 22 13:58:58.639: INFO: Unable to read wheezy_udp@dns-test-service.dns-1364.svc.cluster.local from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69) Jan 22 13:58:58.649: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1364.svc.cluster.local from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69) Jan 22 13:58:58.656: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1364.svc.cluster.local from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69) Jan 22 13:58:58.663: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1364.svc.cluster.local from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69) Jan 22 13:58:58.667: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-1364.svc.cluster.local from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69) Jan 22 13:58:58.671: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-1364.svc.cluster.local from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69) Jan 22 13:58:58.675: INFO: Unable to read wheezy_udp@PodARecord from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69) Jan 22 13:58:58.680: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69) Jan 22 13:58:58.684: INFO: Unable to read 10.111.209.94_udp@PTR from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69) Jan 22 13:58:58.688: INFO: Unable to read 10.111.209.94_tcp@PTR from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69) Jan 22 13:58:58.692: INFO: Unable to read jessie_udp@dns-test-service.dns-1364.svc.cluster.local from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69) Jan 22 13:58:58.696: INFO: Unable to read jessie_tcp@dns-test-service.dns-1364.svc.cluster.local from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69) Jan 22 13:58:58.700: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1364.svc.cluster.local from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the 
Jan 22 13:58:58.700: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1364.svc.cluster.local from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69)
Jan 22 13:58:58.703: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1364.svc.cluster.local from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69)
Jan 22 13:58:58.707: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-1364.svc.cluster.local from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69)
Jan 22 13:58:58.710: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-1364.svc.cluster.local from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69)
Jan 22 13:58:58.714: INFO: Unable to read jessie_udp@PodARecord from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69)
Jan 22 13:58:58.718: INFO: Unable to read jessie_tcp@PodARecord from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69)
Jan 22 13:58:58.722: INFO: Unable to read 10.111.209.94_udp@PTR from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69)
Jan 22 13:58:58.725: INFO: Unable to read 10.111.209.94_tcp@PTR from pod dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69: the server could not find the requested resource (get pods dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69)
Jan 22 13:58:58.725: INFO: Lookups using dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69 failed for: [wheezy_udp@dns-test-service.dns-1364.svc.cluster.local wheezy_tcp@dns-test-service.dns-1364.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1364.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1364.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-1364.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-1364.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.111.209.94_udp@PTR 10.111.209.94_tcp@PTR jessie_udp@dns-test-service.dns-1364.svc.cluster.local jessie_tcp@dns-test-service.dns-1364.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1364.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1364.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-1364.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-1364.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.111.209.94_udp@PTR 10.111.209.94_tcp@PTR]
Jan 22 13:59:03.964: INFO: DNS probes using dns-1364/dns-test-5ac370fd-e897-4ddf-a02c-93cbf9b56a69 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:59:04.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1364" for this suite.
Jan 22 13:59:10.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:59:10.743: INFO: namespace dns-1364 deletion completed in 6.300150043s
• [SLOW TEST:24.670 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
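The names probed above come straight from the Service DNS contract: a headless service yields A records per ready pod and SRV records per named port. A sketch of the kind of service that produces them; the selector is illustrative, while the service and port names mirror the log:

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
spec:
  clusterIP: None            # headless: A records point at the backing pods
  selector:
    app: dns-demo            # hypothetical pod label
  ports:
  - name: http
    protocol: TCP
    port: 80
# Resolves as dns-test-service.<ns>.svc.cluster.local (A) and
# _http._tcp.dns-test-service.<ns>.svc.cluster.local (SRV), as probed above.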
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 13:59:10.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 22 13:59:27.175: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 13:59:27.197: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 13:59:29.198: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 13:59:29.210: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 13:59:31.198: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 13:59:31.206: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 13:59:33.198: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 13:59:33.207: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 13:59:35.198: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 13:59:35.207: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 13:59:37.198: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 13:59:37.206: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 13:59:39.198: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 13:59:39.206: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 13:59:41.198: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 13:59:41.206: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 13:59:43.198: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 13:59:43.209: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 13:59:45.198: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 13:59:45.208: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 13:59:47.198: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 13:59:47.491: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 13:59:49.198: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 13:59:49.204: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 13:59:51.198: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
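The "still exists" loop below is graceful termination at work: on delete, the kubelet first runs the preStop handler to completion, then sends SIGTERM, so the pod lingers until the hook finishes. A sketch of a pod carrying such a hook; the image and hook command are illustrative (the suite's hook actually calls back to the HTTPGet handler pod it created above):

apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo              # hypothetical name
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop ran; sleep 5"]   # runs before SIGTERM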
Jan 22 13:59:51.208: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 13:59:53.198: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 13:59:53.215: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 13:59:55.198: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 13:59:55.209: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 13:59:57.198: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 13:59:57.205: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 13:59:57.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3683" for this suite.
Jan 22 14:00:21.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 14:00:21.374: INFO: namespace container-lifecycle-hook-3683 deletion completed in 24.137945754s
• [SLOW TEST:70.630 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 14:00:21.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 22 14:00:31.635: INFO: Waiting up to 5m0s for pod "client-envvars-e40bdd3e-47e0-45a9-842f-b02dd9c0364e" in namespace "pods-3008" to be "success or failure"
Jan 22 14:00:31.653: INFO: Pod "client-envvars-e40bdd3e-47e0-45a9-842f-b02dd9c0364e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.00999ms
Jan 22 14:00:33.668: INFO: Pod "client-envvars-e40bdd3e-47e0-45a9-842f-b02dd9c0364e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032871161s
Jan 22 14:00:35.680: INFO: Pod "client-envvars-e40bdd3e-47e0-45a9-842f-b02dd9c0364e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044716724s
Jan 22 14:00:37.688: INFO: Pod "client-envvars-e40bdd3e-47e0-45a9-842f-b02dd9c0364e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053315175s
Jan 22 14:00:39.697: INFO: Pod "client-envvars-e40bdd3e-47e0-45a9-842f-b02dd9c0364e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061737502s
Jan 22 14:00:41.708: INFO: Pod "client-envvars-e40bdd3e-47e0-45a9-842f-b02dd9c0364e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073150006s
STEP: Saw pod success
Jan 22 14:00:41.708: INFO: Pod "client-envvars-e40bdd3e-47e0-45a9-842f-b02dd9c0364e" satisfied condition "success or failure"
Jan 22 14:00:41.715: INFO: Trying to get logs from node iruya-node pod client-envvars-e40bdd3e-47e0-45a9-842f-b02dd9c0364e container env3cont:
STEP: delete the pod
Jan 22 14:00:41.779: INFO: Waiting for pod client-envvars-e40bdd3e-47e0-45a9-842f-b02dd9c0364e to disappear
Jan 22 14:00:41.785: INFO: Pod client-envvars-e40bdd3e-47e0-45a9-842f-b02dd9c0364e no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 14:00:41.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3008" for this suite.
Jan 22 14:01:27.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 14:01:28.000: INFO: namespace pods-3008 deletion completed in 46.202768345s
• [SLOW TEST:66.625 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
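The env3cont container above sees variables that the kubelet injects for every Service existing when the pod starts, which is why the service must be created before the client pod. For a service shaped like the sketch below, later pods get DEMO_SVC_SERVICE_HOST, DEMO_SVC_SERVICE_PORT, DEMO_SVC_PORT_80_TCP, and friends (dashes become underscores, uppercased); the name and port are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: demo-svc                  # hypothetical name
spec:
  selector:
    app: server                   # hypothetical backend pods
  ports:
  - port: 80
# In a pod created afterwards:  sh -c 'env | grep DEMO_SVC_'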
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 14:01:28.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 22 14:01:28.054: INFO: Waiting up to 5m0s for pod "pod-f2913f85-c7e4-4678-9157-901da0fe6d09" in namespace "emptydir-966" to be "success or failure"
Jan 22 14:01:28.062: INFO: Pod "pod-f2913f85-c7e4-4678-9157-901da0fe6d09": Phase="Pending", Reason="", readiness=false. Elapsed: 7.778683ms
Jan 22 14:01:30.069: INFO: Pod "pod-f2913f85-c7e4-4678-9157-901da0fe6d09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014653296s
Jan 22 14:01:32.083: INFO: Pod "pod-f2913f85-c7e4-4678-9157-901da0fe6d09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028219507s
Jan 22 14:01:34.091: INFO: Pod "pod-f2913f85-c7e4-4678-9157-901da0fe6d09": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036161479s
Jan 22 14:01:36.155: INFO: Pod "pod-f2913f85-c7e4-4678-9157-901da0fe6d09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100923703s
STEP: Saw pod success
Jan 22 14:01:36.155: INFO: Pod "pod-f2913f85-c7e4-4678-9157-901da0fe6d09" satisfied condition "success or failure"
Jan 22 14:01:36.162: INFO: Trying to get logs from node iruya-node pod pod-f2913f85-c7e4-4678-9157-901da0fe6d09 container test-container:
STEP: delete the pod
Jan 22 14:01:36.215: INFO: Waiting for pod pod-f2913f85-c7e4-4678-9157-901da0fe6d09 to disappear
Jan 22 14:01:36.246: INFO: Pod pod-f2913f85-c7e4-4678-9157-901da0fe6d09 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 14:01:36.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-966" for this suite.
Jan 22 14:01:42.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 14:01:42.440: INFO: namespace emptydir-966 deletion completed in 6.186973813s
• [SLOW TEST:14.440 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 14:01:42.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-8381/secret-test-3e36bc90-3768-4ca1-9286-6849f52f3888
STEP: Creating a pod to test consume secrets
Jan 22 14:01:42.672: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ef38957-846d-4f6c-9ab4-69bd9947a397" in namespace "secrets-8381" to be "success or failure"
Jan 22 14:01:42.700: INFO: Pod "pod-configmaps-7ef38957-846d-4f6c-9ab4-69bd9947a397": Phase="Pending", Reason="", readiness=false. Elapsed: 27.845213ms
Jan 22 14:01:44.714: INFO: Pod "pod-configmaps-7ef38957-846d-4f6c-9ab4-69bd9947a397": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042041295s
Jan 22 14:01:46.770: INFO: Pod "pod-configmaps-7ef38957-846d-4f6c-9ab4-69bd9947a397": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0977447s
Jan 22 14:01:48.811: INFO: Pod "pod-configmaps-7ef38957-846d-4f6c-9ab4-69bd9947a397": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139534324s
Jan 22 14:01:50.879: INFO: Pod "pod-configmaps-7ef38957-846d-4f6c-9ab4-69bd9947a397": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.20707656s
STEP: Saw pod success
Jan 22 14:01:50.879: INFO: Pod "pod-configmaps-7ef38957-846d-4f6c-9ab4-69bd9947a397" satisfied condition "success or failure"
Jan 22 14:01:50.884: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7ef38957-846d-4f6c-9ab4-69bd9947a397 container env-test:
STEP: delete the pod
Jan 22 14:01:51.123: INFO: Waiting for pod pod-configmaps-7ef38957-846d-4f6c-9ab4-69bd9947a397 to disappear
Jan 22 14:01:51.134: INFO: Pod pod-configmaps-7ef38957-846d-4f6c-9ab4-69bd9947a397 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 14:01:51.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8381" for this suite.
Jan 22 14:01:57.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 14:01:57.287: INFO: namespace secrets-8381 deletion completed in 6.14725216s
• [SLOW TEST:14.847 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
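Consuming a secret through the environment rather than a volume uses valueFrom.secretKeyRef for a single key (or envFrom.secretRef for the whole secret). A sketch of the single-key form; the secret and key names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-demo  # hypothetical Secret in the same namespace
          key: data-1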
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 14:01:57.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 22 14:01:57.391: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
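Registering an aggregated API means creating an APIService object that tells the kube-apiserver to proxy one group/version to an in-cluster Service fronting the sample apiserver's Deployment. A sketch of such a registration; the group, names, and namespace are illustrative, not read from this run:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # <version>.<group>; group name is hypothetical
spec:
  group: wardle.example.com
  version: v1alpha1
  service:
    name: sample-api                  # Service in front of the sample apiserver pods
    namespace: aggregator-demo
  insecureSkipTLSVerify: true         # test-only shortcut; production supplies caBundle
  groupPriorityMinimum: 1000
  versionPriority: 15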
Jan 22 14:01:57.886: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 22 14:02:00.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298517, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298517, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298517, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 14:02:02.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298517, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298517, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298517, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 14:02:04.218: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298517, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298517, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298517, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 14:02:06.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298517, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298517, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298517, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715298517, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 22 14:02:13.738: INFO: Waited 5.485203752s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:02:14.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9093" for this suite. Jan 22 14:02:20.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:02:20.958: INFO: namespace aggregator-9093 deletion completed in 6.266686644s • [SLOW TEST:23.670 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:02:20.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Jan 22 14:02:29.122: INFO: Pod pod-hostip-d2295490-dc8d-4ea8-9edd-a0e03be5dc10 has hostIP: 10.96.3.65 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:02:29.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9425" for this suite. 
Jan 22 14:02:51.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:02:51.258: INFO: namespace pods-9425 deletion completed in 22.132161253s • [SLOW TEST:30.300 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:02:51.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Jan 22 14:02:51.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6070' Jan 22 14:02:53.849: INFO: stderr: "" Jan 22 14:02:53.849: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Jan 22 14:02:54.868: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:02:54.868: INFO: Found 0 / 1 Jan 22 14:02:55.861: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:02:55.861: INFO: Found 0 / 1 Jan 22 14:02:56.876: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:02:56.876: INFO: Found 0 / 1 Jan 22 14:02:57.864: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:02:57.864: INFO: Found 0 / 1 Jan 22 14:02:58.859: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:02:58.859: INFO: Found 0 / 1 Jan 22 14:02:59.913: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:02:59.913: INFO: Found 0 / 1 Jan 22 14:03:00.860: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:03:00.860: INFO: Found 1 / 1 Jan 22 14:03:00.860: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 22 14:03:00.868: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:03:00.869: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jan 22 14:03:00.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rdp5w redis-master --namespace=kubectl-6070' Jan 22 14:03:01.031: INFO: stderr: "" Jan 22 14:03:01.031: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Jan 14:03:00.270 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Jan 14:03:00.270 # Server started, Redis version 3.2.12\n1:M 22 Jan 14:03:00.270 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Jan 14:03:00.270 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jan 22 14:03:01.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rdp5w redis-master --namespace=kubectl-6070 --tail=1' Jan 22 14:03:01.160: INFO: stderr: "" Jan 22 14:03:01.160: INFO: stdout: "1:M 22 Jan 14:03:00.270 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jan 22 14:03:01.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rdp5w redis-master --namespace=kubectl-6070 --limit-bytes=1' Jan 22 14:03:01.262: INFO: stderr: "" Jan 22 14:03:01.262: INFO: stdout: " " STEP: exposing timestamps Jan 22 14:03:01.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rdp5w redis-master --namespace=kubectl-6070 --tail=1 --timestamps' Jan 22 14:03:01.360: INFO: stderr: "" Jan 22 14:03:01.360: INFO: stdout: "2020-01-22T14:03:00.271068824Z 1:M 22 Jan 14:03:00.270 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jan 22 14:03:03.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rdp5w redis-master --namespace=kubectl-6070 --since=1s' Jan 22 14:03:04.059: INFO: stderr: "" Jan 22 14:03:04.059: INFO: stdout: "" Jan 22 14:03:04.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rdp5w redis-master --namespace=kubectl-6070 --since=24h' Jan 22 14:03:04.175: INFO: stderr: "" Jan 22 14:03:04.175: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Jan 14:03:00.270 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Jan 14:03:00.270 # Server started, Redis version 3.2.12\n1:M 22 Jan 14:03:00.270 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Jan 14:03:00.270 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Jan 22 14:03:04.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6070' Jan 22 14:03:04.279: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 22 14:03:04.279: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jan 22 14:03:04.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-6070' Jan 22 14:03:04.367: INFO: stderr: "No resources found.\n" Jan 22 14:03:04.367: INFO: stdout: "" Jan 22 14:03:04.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-6070 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 22 14:03:04.457: INFO: stderr: "" Jan 22 14:03:04.457: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:03:04.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6070" for this suite. 
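The filtering STEPs above exercise the standard kubectl log filters; generically (POD and CONTAINER are placeholders):

    kubectl logs POD -c CONTAINER --tail=1               # only the last line
    kubectl logs POD -c CONTAINER --limit-bytes=1        # cap output at N bytes (hence the single character above)
    kubectl logs POD -c CONTAINER --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
    kubectl logs POD -c CONTAINER --since=1s             # only entries newer than the given duration
    kubectl logs POD -c CONTAINER --since=24h

Note how --since=1s returned an empty string above because Redis had logged nothing in the previous second, while --since=24h returned the full startup banner again.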
Jan 22 14:03:20.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:03:20.631: INFO: namespace kubectl-6070 deletion completed in 16.168728776s • [SLOW TEST:29.373 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:03:20.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 22 14:03:20.740: INFO: Waiting up to 5m0s for pod "pod-d775c96c-6a8e-4257-86c3-55c853b97403" in namespace "emptydir-3780" to be "success or failure" Jan 22 14:03:20.851: INFO: Pod "pod-d775c96c-6a8e-4257-86c3-55c853b97403": Phase="Pending", Reason="", readiness=false. Elapsed: 110.609166ms Jan 22 14:03:22.868: INFO: Pod "pod-d775c96c-6a8e-4257-86c3-55c853b97403": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128143308s Jan 22 14:03:24.887: INFO: Pod "pod-d775c96c-6a8e-4257-86c3-55c853b97403": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146726413s Jan 22 14:03:26.897: INFO: Pod "pod-d775c96c-6a8e-4257-86c3-55c853b97403": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15659462s Jan 22 14:03:28.907: INFO: Pod "pod-d775c96c-6a8e-4257-86c3-55c853b97403": Phase="Pending", Reason="", readiness=false. Elapsed: 8.166608867s Jan 22 14:03:31.139: INFO: Pod "pod-d775c96c-6a8e-4257-86c3-55c853b97403": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.398868624s STEP: Saw pod success Jan 22 14:03:31.139: INFO: Pod "pod-d775c96c-6a8e-4257-86c3-55c853b97403" satisfied condition "success or failure" Jan 22 14:03:31.143: INFO: Trying to get logs from node iruya-node pod pod-d775c96c-6a8e-4257-86c3-55c853b97403 container test-container: STEP: delete the pod Jan 22 14:03:31.642: INFO: Waiting for pod pod-d775c96c-6a8e-4257-86c3-55c853b97403 to disappear Jan 22 14:03:31.721: INFO: Pod pod-d775c96c-6a8e-4257-86c3-55c853b97403 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:03:31.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3780" for this suite. 
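The test name encodes its parameters: a non-root user, file mode 0777, and the default emptyDir medium (node disk rather than tmpfs). The suite uses its own mount-test image for the actual assertions; a hedged sketch of the same shape of pod, with an assumed image and UID:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0777-demo
    spec:
      securityContext:
        runAsUser: 1001                  # any non-root UID
      containers:
      - name: test
        image: busybox:1.31              # assumed; not the suite's test image
        command: ["sh", "-c", "touch /mnt/vol/f && chmod 0777 /mnt/vol/f && ls -l /mnt/vol/f"]
        volumeMounts:
        - name: vol
          mountPath: /mnt/vol
      volumes:
      - name: vol
        emptyDir: {}                     # default medium
      restartPolicy: Never
    EOF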
Jan 22 14:03:37.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:03:37.940: INFO: namespace emptydir-3780 deletion completed in 6.209144052s • [SLOW TEST:17.308 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:03:37.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-484, will wait for the garbage collector to delete the pods Jan 22 14:03:50.151: INFO: Deleting Job.batch foo took: 14.361068ms Jan 22 14:03:50.451: INFO: Terminating Job.batch foo pods took: 300.438448ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:04:36.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-484" for this suite. 
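The interesting part above is that the framework deletes the Job and then waits for the garbage collector to remove its pods, which is why this spec takes most of a minute. The same flow by hand:

    kubectl delete job foo              # cascading delete; the GC cleans up the pods
    kubectl get pods -l job-name=foo    # the Job controller labels its pods with job-name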
Jan 22 14:04:42.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:04:42.856: INFO: namespace job-484 deletion completed in 6.18734619s • [SLOW TEST:64.915 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:04:42.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-a8111be1-5c13-4b58-92d8-57fbc563f419 STEP: Creating a pod to test consume configMaps Jan 22 14:04:44.103: INFO: Waiting up to 5m0s for pod "pod-configmaps-c80c0933-fba3-4746-9150-14c8c7041903" in namespace "configmap-5000" to be "success or failure" Jan 22 14:04:44.173: INFO: Pod "pod-configmaps-c80c0933-fba3-4746-9150-14c8c7041903": Phase="Pending", Reason="", readiness=false. Elapsed: 69.487914ms Jan 22 14:04:46.182: INFO: Pod "pod-configmaps-c80c0933-fba3-4746-9150-14c8c7041903": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078612895s Jan 22 14:04:48.190: INFO: Pod "pod-configmaps-c80c0933-fba3-4746-9150-14c8c7041903": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086521313s Jan 22 14:04:50.196: INFO: Pod "pod-configmaps-c80c0933-fba3-4746-9150-14c8c7041903": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093058824s Jan 22 14:04:52.206: INFO: Pod "pod-configmaps-c80c0933-fba3-4746-9150-14c8c7041903": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102814508s Jan 22 14:04:54.222: INFO: Pod "pod-configmaps-c80c0933-fba3-4746-9150-14c8c7041903": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118853939s STEP: Saw pod success Jan 22 14:04:54.222: INFO: Pod "pod-configmaps-c80c0933-fba3-4746-9150-14c8c7041903" satisfied condition "success or failure" Jan 22 14:04:54.225: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c80c0933-fba3-4746-9150-14c8c7041903 container configmap-volume-test: STEP: delete the pod Jan 22 14:04:54.262: INFO: Waiting for pod pod-configmaps-c80c0933-fba3-4746-9150-14c8c7041903 to disappear Jan 22 14:04:54.285: INFO: Pod pod-configmaps-c80c0933-fba3-4746-9150-14c8c7041903 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:04:54.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5000" for this suite. 
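"With mappings" refers to the items list on a configMap volume, which renames a key to an arbitrary relative path inside the mount; "as non-root" means the pod runs under a non-root UID. A hedged sketch with assumed names and image:

    kubectl create configmap demo-cm --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-mapping-demo
    spec:
      securityContext:
        runAsUser: 1000                  # non-root
      containers:
      - name: view
        image: busybox:1.31              # assumed
        command: ["sh", "-c", "cat /etc/cm/path/to/data-2"]
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: demo-cm
          items:
          - key: data-1                  # the mapping: key data-1 ...
            path: path/to/data-2         # ... appears at this relative path
      restartPolicy: Never
    EOF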
Jan 22 14:05:00.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:05:00.526: INFO: namespace configmap-5000 deletion completed in 6.236388274s • [SLOW TEST:17.670 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:05:00.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Jan 22 14:05:00.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 22 14:05:00.777: INFO: stderr: "" Jan 22 14:05:00.777: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:05:00.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3169" for this suite. 
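The escape sequences in the stdout above (\x1b[0;32m and friends) are just ANSI colour codes; the test only validates that the "Kubernetes master" entry and its URL appear. The commands involved:

    kubectl cluster-info          # master and KubeDNS endpoints, as validated above
    kubectl cluster-info dump     # full cluster state, for deeper debugging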
Jan 22 14:05:06.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:05:06.892: INFO: namespace kubectl-3169 deletion completed in 6.109854275s • [SLOW TEST:6.365 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:05:06.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-7d6x STEP: Creating a pod to test atomic-volume-subpath Jan 22 14:05:07.051: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-7d6x" in namespace "subpath-3399" to be "success or failure" Jan 22 14:05:07.064: INFO: Pod "pod-subpath-test-downwardapi-7d6x": Phase="Pending", Reason="", readiness=false. Elapsed: 12.056528ms Jan 22 14:05:09.143: INFO: Pod "pod-subpath-test-downwardapi-7d6x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091833071s Jan 22 14:05:11.166: INFO: Pod "pod-subpath-test-downwardapi-7d6x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114114062s Jan 22 14:05:13.174: INFO: Pod "pod-subpath-test-downwardapi-7d6x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122139857s Jan 22 14:05:15.931: INFO: Pod "pod-subpath-test-downwardapi-7d6x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.879839056s Jan 22 14:05:17.939: INFO: Pod "pod-subpath-test-downwardapi-7d6x": Phase="Running", Reason="", readiness=true. Elapsed: 10.887793754s Jan 22 14:05:19.959: INFO: Pod "pod-subpath-test-downwardapi-7d6x": Phase="Running", Reason="", readiness=true. Elapsed: 12.907354988s Jan 22 14:05:21.968: INFO: Pod "pod-subpath-test-downwardapi-7d6x": Phase="Running", Reason="", readiness=true. Elapsed: 14.91637466s Jan 22 14:05:23.981: INFO: Pod "pod-subpath-test-downwardapi-7d6x": Phase="Running", Reason="", readiness=true. Elapsed: 16.929120821s Jan 22 14:05:25.988: INFO: Pod "pod-subpath-test-downwardapi-7d6x": Phase="Running", Reason="", readiness=true. Elapsed: 18.936094621s Jan 22 14:05:27.997: INFO: Pod "pod-subpath-test-downwardapi-7d6x": Phase="Running", Reason="", readiness=true. Elapsed: 20.945474556s Jan 22 14:05:30.013: INFO: Pod "pod-subpath-test-downwardapi-7d6x": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.961703666s Jan 22 14:05:32.022: INFO: Pod "pod-subpath-test-downwardapi-7d6x": Phase="Running", Reason="", readiness=true. Elapsed: 24.970058954s Jan 22 14:05:34.029: INFO: Pod "pod-subpath-test-downwardapi-7d6x": Phase="Running", Reason="", readiness=true. Elapsed: 26.977212323s Jan 22 14:05:36.035: INFO: Pod "pod-subpath-test-downwardapi-7d6x": Phase="Running", Reason="", readiness=true. Elapsed: 28.983995933s Jan 22 14:05:38.043: INFO: Pod "pod-subpath-test-downwardapi-7d6x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.991865685s STEP: Saw pod success Jan 22 14:05:38.043: INFO: Pod "pod-subpath-test-downwardapi-7d6x" satisfied condition "success or failure" Jan 22 14:05:38.047: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-7d6x container test-container-subpath-downwardapi-7d6x: STEP: delete the pod Jan 22 14:05:38.131: INFO: Waiting for pod pod-subpath-test-downwardapi-7d6x to disappear Jan 22 14:05:38.142: INFO: Pod pod-subpath-test-downwardapi-7d6x no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-7d6x Jan 22 14:05:38.142: INFO: Deleting pod "pod-subpath-test-downwardapi-7d6x" in namespace "subpath-3399" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:05:38.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3399" for this suite. Jan 22 14:05:44.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:05:44.396: INFO: namespace subpath-3399 deletion completed in 6.24381002s • [SLOW TEST:37.504 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:05:44.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-qbh9b in namespace proxy-6894 I0122 14:05:44.590517 9 runners.go:180] Created replication controller with name: proxy-service-qbh9b, namespace: proxy-6894, replica count: 1 I0122 14:05:45.641286 9 runners.go:180] proxy-service-qbh9b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 14:05:46.641545 9 runners.go:180] proxy-service-qbh9b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 14:05:47.641938 9 runners.go:180] 
proxy-service-qbh9b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 14:05:48.642352 9 runners.go:180] proxy-service-qbh9b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 14:05:49.642779 9 runners.go:180] proxy-service-qbh9b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 14:05:50.643119 9 runners.go:180] proxy-service-qbh9b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 14:05:51.643392 9 runners.go:180] proxy-service-qbh9b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 14:05:52.643681 9 runners.go:180] proxy-service-qbh9b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0122 14:05:53.644126 9 runners.go:180] proxy-service-qbh9b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0122 14:05:54.644373 9 runners.go:180] proxy-service-qbh9b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0122 14:05:55.644708 9 runners.go:180] proxy-service-qbh9b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0122 14:05:56.644976 9 runners.go:180] proxy-service-qbh9b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0122 14:05:57.645316 9 runners.go:180] proxy-service-qbh9b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0122 14:05:58.645614 9 runners.go:180] proxy-service-qbh9b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0122 14:05:59.646007 9 runners.go:180] proxy-service-qbh9b Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 22 14:05:59.686: INFO: Endpoint proxy-6894/proxy-service-qbh9b is not ready yet Jan 22 14:06:01.698: INFO: setup took 17.166271304s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 22 14:06:01.765: INFO: (0) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname2/proxy/: bar (200; 66.940032ms) Jan 22 14:06:01.766: INFO: (0) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname2/proxy/: bar (200; 67.415307ms) Jan 22 14:06:01.766: INFO: (0) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw/proxy/: test (200; 67.892015ms) Jan 22 14:06:01.767: INFO: (0) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:1080/proxy/: ... (200; 68.596627ms) Jan 22 14:06:01.768: INFO: (0) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname1/proxy/: foo (200; 69.770264ms) Jan 22 14:06:01.769: INFO: (0) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:1080/proxy/: test<... 
(200; 70.474893ms) Jan 22 14:06:01.770: INFO: (0) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 71.152534ms) Jan 22 14:06:01.770: INFO: (0) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 71.047465ms) Jan 22 14:06:01.770: INFO: (0) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 71.533302ms) Jan 22 14:06:01.779: INFO: (0) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname1/proxy/: foo (200; 80.291575ms) Jan 22 14:06:01.779: INFO: (0) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 80.3781ms) Jan 22 14:06:01.798: INFO: (0) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:462/proxy/: tls qux (200; 99.576952ms) Jan 22 14:06:01.798: INFO: (0) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname2/proxy/: tls qux (200; 100.073048ms) Jan 22 14:06:01.799: INFO: (0) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: ... (200; 15.81091ms) Jan 22 14:06:01.819: INFO: (1) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 16.011005ms) Jan 22 14:06:01.819: INFO: (1) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: test<... (200; 16.322412ms) Jan 22 14:06:01.829: INFO: (1) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 25.37904ms) Jan 22 14:06:01.829: INFO: (1) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname1/proxy/: foo (200; 25.489049ms) Jan 22 14:06:01.830: INFO: (1) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname1/proxy/: tls baz (200; 26.038696ms) Jan 22 14:06:01.830: INFO: (1) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname2/proxy/: bar (200; 26.161004ms) Jan 22 14:06:01.830: INFO: (1) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname2/proxy/: tls qux (200; 26.347289ms) Jan 22 14:06:01.830: INFO: (1) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname2/proxy/: bar (200; 26.819314ms) Jan 22 14:06:01.833: INFO: (1) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 29.401834ms) Jan 22 14:06:01.833: INFO: (1) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname1/proxy/: foo (200; 29.779113ms) Jan 22 14:06:01.833: INFO: (1) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw/proxy/: test (200; 29.535665ms) Jan 22 14:06:01.833: INFO: (1) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 29.804188ms) Jan 22 14:06:01.833: INFO: (1) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:460/proxy/: tls baz (200; 29.67202ms) Jan 22 14:06:01.867: INFO: (2) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:1080/proxy/: test<... (200; 33.012704ms) Jan 22 14:06:01.867: INFO: (2) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname1/proxy/: tls baz (200; 33.576354ms) Jan 22 14:06:01.867: INFO: (2) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:1080/proxy/: ... 
(200; 33.282838ms) Jan 22 14:06:01.867: INFO: (2) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 33.377917ms) Jan 22 14:06:01.867: INFO: (2) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname1/proxy/: foo (200; 33.588442ms) Jan 22 14:06:01.870: INFO: (2) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname1/proxy/: foo (200; 35.975431ms) Jan 22 14:06:01.870: INFO: (2) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 36.66326ms) Jan 22 14:06:01.870: INFO: (2) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw/proxy/: test (200; 36.198653ms) Jan 22 14:06:01.870: INFO: (2) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 36.197505ms) Jan 22 14:06:01.870: INFO: (2) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 36.617734ms) Jan 22 14:06:01.871: INFO: (2) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname2/proxy/: bar (200; 37.388224ms) Jan 22 14:06:01.871: INFO: (2) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: test<... (200; 14.186209ms) Jan 22 14:06:01.889: INFO: (3) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw/proxy/: test (200; 15.398487ms) Jan 22 14:06:01.889: INFO: (3) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:460/proxy/: tls baz (200; 14.601066ms) Jan 22 14:06:01.889: INFO: (3) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:1080/proxy/: ... (200; 15.786372ms) Jan 22 14:06:01.889: INFO: (3) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: ... (200; 7.719887ms) Jan 22 14:06:01.912: INFO: (4) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:460/proxy/: tls baz (200; 9.479825ms) Jan 22 14:06:01.921: INFO: (4) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: test (200; 23.795717ms) Jan 22 14:06:01.926: INFO: (4) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname1/proxy/: foo (200; 24.16853ms) Jan 22 14:06:01.926: INFO: (4) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 24.270496ms) Jan 22 14:06:01.926: INFO: (4) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:1080/proxy/: test<... (200; 24.338316ms) Jan 22 14:06:01.927: INFO: (4) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 24.617986ms) Jan 22 14:06:01.936: INFO: (5) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 9.638836ms) Jan 22 14:06:01.937: INFO: (5) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:460/proxy/: tls baz (200; 9.766308ms) Jan 22 14:06:01.937: INFO: (5) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:1080/proxy/: test<... (200; 9.838995ms) Jan 22 14:06:01.937: INFO: (5) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 10.060771ms) Jan 22 14:06:01.937: INFO: (5) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:462/proxy/: tls qux (200; 9.955813ms) Jan 22 14:06:01.937: INFO: (5) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw/proxy/: test (200; 10.176871ms) Jan 22 14:06:01.937: INFO: (5) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:1080/proxy/: ... 
(200; 10.188479ms) Jan 22 14:06:01.937: INFO: (5) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 10.305059ms) Jan 22 14:06:01.937: INFO: (5) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 10.332458ms) Jan 22 14:06:01.938: INFO: (5) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: test<... (200; 11.566309ms) Jan 22 14:06:01.953: INFO: (6) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 11.875398ms) Jan 22 14:06:01.954: INFO: (6) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 12.737849ms) Jan 22 14:06:01.956: INFO: (6) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 15.170628ms) Jan 22 14:06:01.956: INFO: (6) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw/proxy/: test (200; 15.201497ms) Jan 22 14:06:01.956: INFO: (6) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:1080/proxy/: ... (200; 15.164111ms) Jan 22 14:06:01.956: INFO: (6) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname1/proxy/: tls baz (200; 15.360071ms) Jan 22 14:06:01.956: INFO: (6) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname1/proxy/: foo (200; 15.352172ms) Jan 22 14:06:01.957: INFO: (6) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:462/proxy/: tls qux (200; 16.136904ms) Jan 22 14:06:01.958: INFO: (6) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname2/proxy/: bar (200; 16.670921ms) Jan 22 14:06:01.958: INFO: (6) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname2/proxy/: tls qux (200; 17.028249ms) Jan 22 14:06:01.958: INFO: (6) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname2/proxy/: bar (200; 17.116178ms) Jan 22 14:06:01.958: INFO: (6) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: test<... (200; 7.434694ms) Jan 22 14:06:01.967: INFO: (7) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 7.901741ms) Jan 22 14:06:01.969: INFO: (7) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 9.297696ms) Jan 22 14:06:01.969: INFO: (7) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 9.65674ms) Jan 22 14:06:01.969: INFO: (7) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw/proxy/: test (200; 9.690721ms) Jan 22 14:06:01.969: INFO: (7) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:462/proxy/: tls qux (200; 9.318613ms) Jan 22 14:06:01.969: INFO: (7) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 9.550032ms) Jan 22 14:06:01.969: INFO: (7) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:460/proxy/: tls baz (200; 10.047193ms) Jan 22 14:06:01.969: INFO: (7) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:1080/proxy/: ... (200; 10.125459ms) Jan 22 14:06:01.969: INFO: (7) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: test<... (200; 14.538004ms) Jan 22 14:06:01.989: INFO: (8) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 14.537116ms) Jan 22 14:06:01.989: INFO: (8) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:1080/proxy/: ... 
(200; 14.540529ms) Jan 22 14:06:01.989: INFO: (8) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:460/proxy/: tls baz (200; 14.509261ms) Jan 22 14:06:01.989: INFO: (8) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 14.591323ms) Jan 22 14:06:01.989: INFO: (8) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 14.6353ms) Jan 22 14:06:01.989: INFO: (8) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw/proxy/: test (200; 14.74354ms) Jan 22 14:06:01.989: INFO: (8) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 14.702237ms) Jan 22 14:06:01.991: INFO: (8) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: test (200; 6.673331ms) Jan 22 14:06:02.004: INFO: (9) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:462/proxy/: tls qux (200; 7.718207ms) Jan 22 14:06:02.004: INFO: (9) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: ... (200; 15.849321ms) Jan 22 14:06:02.014: INFO: (9) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname1/proxy/: foo (200; 17.693305ms) Jan 22 14:06:02.015: INFO: (9) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 18.450639ms) Jan 22 14:06:02.016: INFO: (9) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname1/proxy/: foo (200; 20.101083ms) Jan 22 14:06:02.016: INFO: (9) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 20.121229ms) Jan 22 14:06:02.017: INFO: (9) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:1080/proxy/: test<... (200; 20.787749ms) Jan 22 14:06:02.017: INFO: (9) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname2/proxy/: bar (200; 21.000891ms) Jan 22 14:06:02.017: INFO: (9) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 20.931688ms) Jan 22 14:06:02.017: INFO: (9) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname2/proxy/: tls qux (200; 21.081387ms) Jan 22 14:06:02.018: INFO: (9) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:460/proxy/: tls baz (200; 21.403051ms) Jan 22 14:06:02.018: INFO: (9) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname1/proxy/: tls baz (200; 21.465649ms) Jan 22 14:06:02.018: INFO: (9) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname2/proxy/: bar (200; 21.704373ms) Jan 22 14:06:02.024: INFO: (10) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 6.194655ms) Jan 22 14:06:02.027: INFO: (10) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 9.435516ms) Jan 22 14:06:02.028: INFO: (10) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:460/proxy/: tls baz (200; 9.583255ms) Jan 22 14:06:02.028: INFO: (10) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:462/proxy/: tls qux (200; 10.277375ms) Jan 22 14:06:02.028: INFO: (10) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:1080/proxy/: ... 
(200; 10.387809ms) Jan 22 14:06:02.032: INFO: (10) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname2/proxy/: bar (200; 14.054598ms) Jan 22 14:06:02.033: INFO: (10) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw/proxy/: test (200; 14.533492ms) Jan 22 14:06:02.036: INFO: (10) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 18.350191ms) Jan 22 14:06:02.037: INFO: (10) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname1/proxy/: foo (200; 18.554349ms) Jan 22 14:06:02.037: INFO: (10) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname2/proxy/: bar (200; 18.577699ms) Jan 22 14:06:02.037: INFO: (10) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname1/proxy/: tls baz (200; 18.534213ms) Jan 22 14:06:02.037: INFO: (10) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname1/proxy/: foo (200; 18.56325ms) Jan 22 14:06:02.037: INFO: (10) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:1080/proxy/: test<... (200; 18.595447ms) Jan 22 14:06:02.037: INFO: (10) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname2/proxy/: tls qux (200; 18.544752ms) Jan 22 14:06:02.037: INFO: (10) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 18.550962ms) Jan 22 14:06:02.037: INFO: (10) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: ... (200; 12.164401ms) Jan 22 14:06:02.051: INFO: (11) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 14.373695ms) Jan 22 14:06:02.052: INFO: (11) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 15.007305ms) Jan 22 14:06:02.053: INFO: (11) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 15.749828ms) Jan 22 14:06:02.053: INFO: (11) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 15.859223ms) Jan 22 14:06:02.053: INFO: (11) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw/proxy/: test (200; 15.697168ms) Jan 22 14:06:02.053: INFO: (11) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname1/proxy/: foo (200; 15.93769ms) Jan 22 14:06:02.053: INFO: (11) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname2/proxy/: bar (200; 16.373211ms) Jan 22 14:06:02.053: INFO: (11) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: test<... 
(200; 17.056999ms) Jan 22 14:06:02.054: INFO: (11) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:462/proxy/: tls qux (200; 17.575026ms) Jan 22 14:06:02.055: INFO: (11) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname2/proxy/: tls qux (200; 17.718807ms) Jan 22 14:06:02.055: INFO: (11) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:460/proxy/: tls baz (200; 18.220279ms) Jan 22 14:06:02.055: INFO: (11) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname1/proxy/: tls baz (200; 18.335234ms) Jan 22 14:06:02.077: INFO: (12) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 21.609834ms) Jan 22 14:06:02.077: INFO: (12) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 21.537934ms) Jan 22 14:06:02.078: INFO: (12) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 21.70375ms) Jan 22 14:06:02.078: INFO: (12) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 22.192516ms) Jan 22 14:06:02.078: INFO: (12) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw/proxy/: test (200; 21.865954ms) Jan 22 14:06:02.078: INFO: (12) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: ... (200; 22.979809ms) Jan 22 14:06:02.079: INFO: (12) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:462/proxy/: tls qux (200; 23.660653ms) Jan 22 14:06:02.079: INFO: (12) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:1080/proxy/: test<... (200; 23.498494ms) Jan 22 14:06:02.079: INFO: (12) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:460/proxy/: tls baz (200; 23.518705ms) Jan 22 14:06:02.090: INFO: (12) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname2/proxy/: tls qux (200; 34.428942ms) Jan 22 14:06:02.091: INFO: (12) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname2/proxy/: bar (200; 35.536431ms) Jan 22 14:06:02.091: INFO: (12) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname1/proxy/: foo (200; 35.804352ms) Jan 22 14:06:02.092: INFO: (12) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname2/proxy/: bar (200; 35.967031ms) Jan 22 14:06:02.092: INFO: (12) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname1/proxy/: foo (200; 36.025523ms) Jan 22 14:06:02.092: INFO: (12) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname1/proxy/: tls baz (200; 36.039894ms) Jan 22 14:06:02.112: INFO: (13) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:1080/proxy/: test<... (200; 20.12311ms) Jan 22 14:06:02.112: INFO: (13) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:462/proxy/: tls qux (200; 20.229467ms) Jan 22 14:06:02.112: INFO: (13) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 20.436096ms) Jan 22 14:06:02.112: INFO: (13) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw/proxy/: test (200; 20.400015ms) Jan 22 14:06:02.112: INFO: (13) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 20.214402ms) Jan 22 14:06:02.113: INFO: (13) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:1080/proxy/: ... 
(200; 20.949649ms) Jan 22 14:06:02.113: INFO: (13) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:460/proxy/: tls baz (200; 21.189957ms) Jan 22 14:06:02.113: INFO: (13) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: ... (200; 14.166617ms) Jan 22 14:06:02.137: INFO: (14) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:1080/proxy/: test<... (200; 13.569006ms) Jan 22 14:06:02.137: INFO: (14) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 14.190137ms) Jan 22 14:06:02.137: INFO: (14) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw/proxy/: test (200; 14.766313ms) Jan 22 14:06:02.137: INFO: (14) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 14.691526ms) Jan 22 14:06:02.139: INFO: (14) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: test<... (200; 16.038425ms) Jan 22 14:06:02.164: INFO: (15) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 16.083841ms) Jan 22 14:06:02.164: INFO: (15) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw/proxy/: test (200; 16.617258ms) Jan 22 14:06:02.165: INFO: (15) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:1080/proxy/: ... (200; 17.008683ms) Jan 22 14:06:02.165: INFO: (15) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname2/proxy/: bar (200; 17.369293ms) Jan 22 14:06:02.165: INFO: (15) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname1/proxy/: tls baz (200; 17.566661ms) Jan 22 14:06:02.166: INFO: (15) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 18.290716ms) Jan 22 14:06:02.166: INFO: (15) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: test (200; 5.096217ms) Jan 22 14:06:02.172: INFO: (16) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 6.06042ms) Jan 22 14:06:02.172: INFO: (16) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:462/proxy/: tls qux (200; 6.176653ms) Jan 22 14:06:02.173: INFO: (16) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:1080/proxy/: test<... (200; 7.241903ms) Jan 22 14:06:02.173: INFO: (16) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: ... 
(200; 7.708789ms) Jan 22 14:06:02.175: INFO: (16) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname2/proxy/: tls qux (200; 8.627658ms) Jan 22 14:06:02.175: INFO: (16) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 8.674731ms) Jan 22 14:06:02.175: INFO: (16) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname2/proxy/: bar (200; 9.17844ms) Jan 22 14:06:02.175: INFO: (16) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname1/proxy/: foo (200; 9.471666ms) Jan 22 14:06:02.176: INFO: (16) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname1/proxy/: foo (200; 10.298741ms) Jan 22 14:06:02.177: INFO: (16) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname2/proxy/: bar (200; 10.603566ms) Jan 22 14:06:02.177: INFO: (16) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname1/proxy/: tls baz (200; 10.826109ms) Jan 22 14:06:02.188: INFO: (17) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 10.777699ms) Jan 22 14:06:02.188: INFO: (17) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 10.887477ms) Jan 22 14:06:02.191: INFO: (17) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname1/proxy/: foo (200; 13.933932ms) Jan 22 14:06:02.191: INFO: (17) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:1080/proxy/: test<... (200; 14.301819ms) Jan 22 14:06:02.191: INFO: (17) /api/v1/namespaces/proxy-6894/services/proxy-service-qbh9b:portname2/proxy/: bar (200; 14.316322ms) Jan 22 14:06:02.191: INFO: (17) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname2/proxy/: tls qux (200; 14.334326ms) Jan 22 14:06:02.191: INFO: (17) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 14.402996ms) Jan 22 14:06:02.192: INFO: (17) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname1/proxy/: foo (200; 14.731545ms) Jan 22 14:06:02.193: INFO: (17) /api/v1/namespaces/proxy-6894/services/http:proxy-service-qbh9b:portname2/proxy/: bar (200; 15.84828ms) Jan 22 14:06:02.193: INFO: (17) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname1/proxy/: tls baz (200; 15.90325ms) Jan 22 14:06:02.193: INFO: (17) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw/proxy/: test (200; 16.408879ms) Jan 22 14:06:02.194: INFO: (17) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:462/proxy/: tls qux (200; 16.756646ms) Jan 22 14:06:02.194: INFO: (17) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:460/proxy/: tls baz (200; 16.720372ms) Jan 22 14:06:02.194: INFO: (17) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: ... (200; 16.781972ms) Jan 22 14:06:02.194: INFO: (17) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 16.792712ms) Jan 22 14:06:02.198: INFO: (18) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:462/proxy/: tls qux (200; 3.680235ms) Jan 22 14:06:02.205: INFO: (18) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 10.818752ms) Jan 22 14:06:02.206: INFO: (18) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 11.822067ms) Jan 22 14:06:02.206: INFO: (18) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:1080/proxy/: test<... 
(200; 11.921495ms) Jan 22 14:06:02.206: INFO: (18) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:1080/proxy/: ... (200; 12.045903ms) Jan 22 14:06:02.206: INFO: (18) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:460/proxy/: tls baz (200; 12.084264ms) Jan 22 14:06:02.206: INFO: (18) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 12.241697ms) Jan 22 14:06:02.206: INFO: (18) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 12.276588ms) Jan 22 14:06:02.208: INFO: (18) /api/v1/namespaces/proxy-6894/services/https:proxy-service-qbh9b:tlsportname2/proxy/: tls qux (200; 14.309821ms) Jan 22 14:06:02.208: INFO: (18) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: test (200; 23.786225ms) Jan 22 14:06:02.229: INFO: (19) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 11.002266ms) Jan 22 14:06:02.231: INFO: (19) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:1080/proxy/: test<... (200; 13.738595ms) Jan 22 14:06:02.231: INFO: (19) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw/proxy/: test (200; 13.864883ms) Jan 22 14:06:02.234: INFO: (19) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:162/proxy/: bar (200; 16.316675ms) Jan 22 14:06:02.235: INFO: (19) /api/v1/namespaces/proxy-6894/pods/http:proxy-service-qbh9b-5kbkw:1080/proxy/: ... (200; 17.674607ms) Jan 22 14:06:02.236: INFO: (19) /api/v1/namespaces/proxy-6894/pods/proxy-service-qbh9b-5kbkw:160/proxy/: foo (200; 18.051167ms) Jan 22 14:06:02.236: INFO: (19) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:460/proxy/: tls baz (200; 18.247239ms) Jan 22 14:06:02.236: INFO: (19) /api/v1/namespaces/proxy-6894/pods/https:proxy-service-qbh9b-5kbkw:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Jan 22 14:06:22.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7021' Jan 22 14:06:23.254: INFO: stderr: "" Jan 22 14:06:23.254: INFO: stdout: "pod/pause created\n" Jan 22 14:06:23.254: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 22 14:06:23.254: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7021" to be "running and ready" Jan 22 14:06:23.268: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 13.640038ms Jan 22 14:06:25.277: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022286835s Jan 22 14:06:27.283: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028372289s Jan 22 14:06:29.290: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036030022s Jan 22 14:06:31.297: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.043019934s Jan 22 14:06:31.297: INFO: Pod "pause" satisfied condition "running and ready" Jan 22 14:06:31.298: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Jan 22 14:06:31.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7021' Jan 22 14:06:31.436: INFO: stderr: "" Jan 22 14:06:31.436: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 22 14:06:31.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7021' Jan 22 14:06:31.575: INFO: stderr: "" Jan 22 14:06:31.575: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 22 14:06:31.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7021' Jan 22 14:06:31.659: INFO: stderr: "" Jan 22 14:06:31.659: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 22 14:06:31.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7021' Jan 22 14:06:31.730: INFO: stderr: "" Jan 22 14:06:31.730: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Jan 22 14:06:31.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7021' Jan 22 14:06:31.866: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 22 14:06:31.866: INFO: stdout: "pod \"pause\" force deleted\n" Jan 22 14:06:31.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7021' Jan 22 14:06:32.015: INFO: stderr: "No resources found.\n" Jan 22 14:06:32.015: INFO: stdout: "" Jan 22 14:06:32.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7021 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 22 14:06:32.141: INFO: stderr: "" Jan 22 14:06:32.141: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:06:32.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7021" for this suite. 
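The label round-trip in the spec above reduces to four kubectl invocations. A minimal sketch for reproducing it by hand against any reachable cluster; the pause image tag is illustrative, not taken from this log:

    # create a long-running pod to label (image tag is an assumption)
    kubectl run pause --generator=run-pod/v1 --image=k8s.gcr.io/pause:3.1
    # attach the label, then surface it as a column with -L
    kubectl label pods pause testing-label=testing-label-value
    kubectl get pod pause -L testing-label     # TESTING-LABEL shows testing-label-value
    # a trailing dash on the key removes the label
    kubectl label pods pause testing-label-
    kubectl get pod pause -L testing-label     # TESTING-LABEL column is now empty
    kubectl delete pod pause --grace-period=0 --force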
Jan 22 14:06:38.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:06:38.362: INFO: namespace kubectl-7021 deletion completed in 6.17948284s • [SLOW TEST:15.483 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:06:38.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0122 14:06:51.663527 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 22 14:06:51.663: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:06:51.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7132" for this suite. 
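What the garbage-collector spec above checks is that a dependent carrying two owner references survives deletion of one owner, even when that deletion waits for dependents. A hedged sketch of the moving parts, assuming `kubectl proxy` is running on its default port; the dependent pod name is hypothetical, and note that a JSON merge patch replaces the whole ownerReferences list, so both owners must be listed:

    # collect the UIDs of both owners (RC names are from the log above)
    STAY_UID=$(kubectl get rc simpletest-rc-to-stay -n gc-7132 -o jsonpath='{.metadata.uid}')
    GO_UID=$(kubectl get rc simpletest-rc-to-be-deleted -n gc-7132 -o jsonpath='{.metadata.uid}')
    # give one dependent pod (hypothetical name) both RCs as owners
    kubectl patch pod simpletest-rc-to-be-deleted-xxxxx -n gc-7132 --type=merge -p "{\"metadata\":{\"ownerReferences\":[
      {\"apiVersion\":\"v1\",\"kind\":\"ReplicationController\",\"name\":\"simpletest-rc-to-be-deleted\",\"uid\":\"$GO_UID\"},
      {\"apiVersion\":\"v1\",\"kind\":\"ReplicationController\",\"name\":\"simpletest-rc-to-stay\",\"uid\":\"$STAY_UID\"}]}}"
    # delete one owner with foreground propagation via the raw API
    # (assumes: kubectl proxy & serving on localhost:8001)
    curl -X DELETE -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
      http://localhost:8001/api/v1/namespaces/gc-7132/replicationcontrollers/simpletest-rc-to-be-deleted
    # the doubly-owned pod should stay Running; singly-owned siblings are collected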
Jan 22 14:07:04.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:07:04.248: INFO: namespace gc-7132 deletion completed in 12.237640053s • [SLOW TEST:25.886 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:07:04.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-6210 I0122 14:07:04.431145 9 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6210, replica count: 1 I0122 14:07:05.481993 9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 14:07:06.482348 9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 14:07:07.482848 9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 14:07:08.483169 9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 14:07:09.483581 9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 14:07:10.484125 9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 14:07:11.484382 9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 14:07:12.484762 9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 14:07:13.485110 9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 22 14:07:13.649: INFO: Created: latency-svc-vsx44 Jan 22 14:07:13.671: INFO: Got endpoints: latency-svc-vsx44 [86.027151ms] Jan 22 14:07:13.834: INFO: Created: latency-svc-hrc7l Jan 22 14:07:13.846: INFO: Got endpoints: latency-svc-hrc7l [174.743474ms] Jan 22 14:07:13.962: INFO: Created: latency-svc-l85g9 Jan 22 14:07:13.982: INFO: Got endpoints: latency-svc-l85g9 
[309.627443ms] Jan 22 14:07:14.049: INFO: Created: latency-svc-2wd98 Jan 22 14:07:14.173: INFO: Got endpoints: latency-svc-2wd98 [500.606426ms] Jan 22 14:07:14.186: INFO: Created: latency-svc-bn2h6 Jan 22 14:07:14.204: INFO: Got endpoints: latency-svc-bn2h6 [532.762184ms] Jan 22 14:07:14.263: INFO: Created: latency-svc-95xdz Jan 22 14:07:14.373: INFO: Got endpoints: latency-svc-95xdz [700.574001ms] Jan 22 14:07:14.414: INFO: Created: latency-svc-z4rff Jan 22 14:07:14.436: INFO: Got endpoints: latency-svc-z4rff [764.503925ms] Jan 22 14:07:14.550: INFO: Created: latency-svc-gbg7h Jan 22 14:07:14.564: INFO: Got endpoints: latency-svc-gbg7h [892.145566ms] Jan 22 14:07:14.627: INFO: Created: latency-svc-5rj72 Jan 22 14:07:14.629: INFO: Got endpoints: latency-svc-5rj72 [957.128146ms] Jan 22 14:07:14.723: INFO: Created: latency-svc-nndh4 Jan 22 14:07:14.737: INFO: Got endpoints: latency-svc-nndh4 [1.064914961s] Jan 22 14:07:14.771: INFO: Created: latency-svc-spzvd Jan 22 14:07:14.789: INFO: Got endpoints: latency-svc-spzvd [1.116105799s] Jan 22 14:07:14.888: INFO: Created: latency-svc-bn8x5 Jan 22 14:07:14.913: INFO: Got endpoints: latency-svc-bn8x5 [1.240766181s] Jan 22 14:07:14.960: INFO: Created: latency-svc-cpwh2 Jan 22 14:07:14.972: INFO: Got endpoints: latency-svc-cpwh2 [1.299161821s] Jan 22 14:07:15.090: INFO: Created: latency-svc-h6vjf Jan 22 14:07:15.102: INFO: Got endpoints: latency-svc-h6vjf [1.429822825s] Jan 22 14:07:15.155: INFO: Created: latency-svc-44vlv Jan 22 14:07:15.169: INFO: Got endpoints: latency-svc-44vlv [1.496903628s] Jan 22 14:07:15.276: INFO: Created: latency-svc-pqzwt Jan 22 14:07:15.362: INFO: Created: latency-svc-l95ld Jan 22 14:07:15.373: INFO: Got endpoints: latency-svc-pqzwt [1.701032847s] Jan 22 14:07:15.541: INFO: Got endpoints: latency-svc-l95ld [1.694450206s] Jan 22 14:07:15.551: INFO: Created: latency-svc-4d8fz Jan 22 14:07:15.560: INFO: Got endpoints: latency-svc-4d8fz [1.578260998s] Jan 22 14:07:15.625: INFO: Created: latency-svc-nhdks Jan 22 14:07:15.732: INFO: Got endpoints: latency-svc-nhdks [1.558656375s] Jan 22 14:07:15.765: INFO: Created: latency-svc-snn74 Jan 22 14:07:15.787: INFO: Got endpoints: latency-svc-snn74 [1.58236084s] Jan 22 14:07:15.837: INFO: Created: latency-svc-dj66j Jan 22 14:07:15.926: INFO: Got endpoints: latency-svc-dj66j [1.553026542s] Jan 22 14:07:15.961: INFO: Created: latency-svc-5k449 Jan 22 14:07:15.962: INFO: Got endpoints: latency-svc-5k449 [175.417966ms] Jan 22 14:07:16.086: INFO: Created: latency-svc-hmlnx Jan 22 14:07:16.098: INFO: Got endpoints: latency-svc-hmlnx [1.661836776s] Jan 22 14:07:16.172: INFO: Created: latency-svc-tbpz7 Jan 22 14:07:16.292: INFO: Got endpoints: latency-svc-tbpz7 [1.728045963s] Jan 22 14:07:16.307: INFO: Created: latency-svc-zhlpm Jan 22 14:07:16.319: INFO: Got endpoints: latency-svc-zhlpm [1.690015582s] Jan 22 14:07:16.378: INFO: Created: latency-svc-pjzjg Jan 22 14:07:16.387: INFO: Got endpoints: latency-svc-pjzjg [1.649562469s] Jan 22 14:07:16.502: INFO: Created: latency-svc-xtth4 Jan 22 14:07:16.508: INFO: Got endpoints: latency-svc-xtth4 [1.719573683s] Jan 22 14:07:16.588: INFO: Created: latency-svc-669zb Jan 22 14:07:16.666: INFO: Got endpoints: latency-svc-669zb [1.753140588s] Jan 22 14:07:16.683: INFO: Created: latency-svc-vd22c Jan 22 14:07:16.683: INFO: Got endpoints: latency-svc-vd22c [1.711172217s] Jan 22 14:07:16.721: INFO: Created: latency-svc-bvmhm Jan 22 14:07:16.732: INFO: Got endpoints: latency-svc-bvmhm [1.629438644s] Jan 22 14:07:16.840: INFO: Created: latency-svc-x6jms Jan 
22 14:07:16.923: INFO: Got endpoints: latency-svc-x6jms [1.753522194s] Jan 22 14:07:16.927: INFO: Created: latency-svc-d6hqw Jan 22 14:07:17.009: INFO: Created: latency-svc-wz8v7 Jan 22 14:07:17.009: INFO: Got endpoints: latency-svc-d6hqw [1.635925514s] Jan 22 14:07:17.017: INFO: Got endpoints: latency-svc-wz8v7 [1.475381244s] Jan 22 14:07:17.066: INFO: Created: latency-svc-2h4x7 Jan 22 14:07:17.077: INFO: Got endpoints: latency-svc-2h4x7 [1.516413022s] Jan 22 14:07:17.152: INFO: Created: latency-svc-hvfzg Jan 22 14:07:17.160: INFO: Got endpoints: latency-svc-hvfzg [1.428094665s] Jan 22 14:07:17.208: INFO: Created: latency-svc-kg62c Jan 22 14:07:17.358: INFO: Got endpoints: latency-svc-kg62c [1.431350559s] Jan 22 14:07:17.365: INFO: Created: latency-svc-2zgxm Jan 22 14:07:17.366: INFO: Got endpoints: latency-svc-2zgxm [1.403486043s] Jan 22 14:07:17.460: INFO: Created: latency-svc-xrlpf Jan 22 14:07:17.460: INFO: Got endpoints: latency-svc-xrlpf [1.361658501s] Jan 22 14:07:17.540: INFO: Created: latency-svc-lv55j Jan 22 14:07:17.552: INFO: Got endpoints: latency-svc-lv55j [1.259396565s] Jan 22 14:07:17.597: INFO: Created: latency-svc-9575f Jan 22 14:07:17.625: INFO: Got endpoints: latency-svc-9575f [1.305610524s] Jan 22 14:07:17.743: INFO: Created: latency-svc-bz4lq Jan 22 14:07:17.747: INFO: Got endpoints: latency-svc-bz4lq [1.360419826s] Jan 22 14:07:17.790: INFO: Created: latency-svc-zq524 Jan 22 14:07:17.807: INFO: Got endpoints: latency-svc-zq524 [1.298507227s] Jan 22 14:07:17.946: INFO: Created: latency-svc-tnd5p Jan 22 14:07:17.963: INFO: Got endpoints: latency-svc-tnd5p [1.296448312s] Jan 22 14:07:17.997: INFO: Created: latency-svc-j9pfj Jan 22 14:07:18.002: INFO: Got endpoints: latency-svc-j9pfj [1.318706124s] Jan 22 14:07:18.102: INFO: Created: latency-svc-76877 Jan 22 14:07:18.118: INFO: Got endpoints: latency-svc-76877 [1.385817523s] Jan 22 14:07:18.153: INFO: Created: latency-svc-d58m5 Jan 22 14:07:18.165: INFO: Got endpoints: latency-svc-d58m5 [1.241858559s] Jan 22 14:07:18.209: INFO: Created: latency-svc-6w58z Jan 22 14:07:18.265: INFO: Got endpoints: latency-svc-6w58z [1.255581314s] Jan 22 14:07:18.282: INFO: Created: latency-svc-j8sxz Jan 22 14:07:18.293: INFO: Got endpoints: latency-svc-j8sxz [1.276112674s] Jan 22 14:07:18.336: INFO: Created: latency-svc-gm4gh Jan 22 14:07:18.425: INFO: Got endpoints: latency-svc-gm4gh [1.348792487s] Jan 22 14:07:18.458: INFO: Created: latency-svc-86lrn Jan 22 14:07:18.466: INFO: Got endpoints: latency-svc-86lrn [1.305602928s] Jan 22 14:07:18.521: INFO: Created: latency-svc-d2pjr Jan 22 14:07:18.618: INFO: Created: latency-svc-zv8cp Jan 22 14:07:18.628: INFO: Got endpoints: latency-svc-d2pjr [1.270231413s] Jan 22 14:07:18.630: INFO: Got endpoints: latency-svc-zv8cp [1.263835461s] Jan 22 14:07:18.685: INFO: Created: latency-svc-6jcbl Jan 22 14:07:18.696: INFO: Got endpoints: latency-svc-6jcbl [1.236554915s] Jan 22 14:07:18.827: INFO: Created: latency-svc-7svwm Jan 22 14:07:18.838: INFO: Got endpoints: latency-svc-7svwm [1.285521682s] Jan 22 14:07:18.890: INFO: Created: latency-svc-lcs8h Jan 22 14:07:18.964: INFO: Got endpoints: latency-svc-lcs8h [1.339163334s] Jan 22 14:07:19.012: INFO: Created: latency-svc-bdpdv Jan 22 14:07:19.028: INFO: Got endpoints: latency-svc-bdpdv [1.280755942s] Jan 22 14:07:19.166: INFO: Created: latency-svc-9wxs6 Jan 22 14:07:19.177: INFO: Got endpoints: latency-svc-9wxs6 [1.370230609s] Jan 22 14:07:19.225: INFO: Created: latency-svc-4qd8m Jan 22 14:07:19.237: INFO: Got endpoints: latency-svc-4qd8m [1.273690614s] 
Jan 22 14:07:19.330: INFO: Created: latency-svc-8tvpr Jan 22 14:07:19.346: INFO: Got endpoints: latency-svc-8tvpr [1.344432926s] Jan 22 14:07:19.396: INFO: Created: latency-svc-2zxww Jan 22 14:07:19.410: INFO: Got endpoints: latency-svc-2zxww [1.292385368s] Jan 22 14:07:19.607: INFO: Created: latency-svc-4mq5s Jan 22 14:07:19.646: INFO: Created: latency-svc-9kg7l Jan 22 14:07:19.650: INFO: Got endpoints: latency-svc-4mq5s [1.485556066s] Jan 22 14:07:19.652: INFO: Got endpoints: latency-svc-9kg7l [1.386585862s] Jan 22 14:07:19.797: INFO: Created: latency-svc-f5nfz Jan 22 14:07:19.811: INFO: Got endpoints: latency-svc-f5nfz [1.518210782s] Jan 22 14:07:19.885: INFO: Created: latency-svc-5hl5p Jan 22 14:07:19.898: INFO: Got endpoints: latency-svc-5hl5p [1.472157655s] Jan 22 14:07:19.977: INFO: Created: latency-svc-2kbv2 Jan 22 14:07:19.989: INFO: Got endpoints: latency-svc-2kbv2 [1.523525592s] Jan 22 14:07:20.044: INFO: Created: latency-svc-c4br9 Jan 22 14:07:20.159: INFO: Got endpoints: latency-svc-c4br9 [1.530713981s] Jan 22 14:07:20.167: INFO: Created: latency-svc-448rg Jan 22 14:07:20.209: INFO: Created: latency-svc-n9qz5 Jan 22 14:07:20.210: INFO: Got endpoints: latency-svc-448rg [1.580290704s] Jan 22 14:07:20.235: INFO: Got endpoints: latency-svc-n9qz5 [1.538844769s] Jan 22 14:07:20.328: INFO: Created: latency-svc-74mwr Jan 22 14:07:20.328: INFO: Got endpoints: latency-svc-74mwr [1.490398795s] Jan 22 14:07:20.392: INFO: Created: latency-svc-wdhkf Jan 22 14:07:20.412: INFO: Got endpoints: latency-svc-wdhkf [1.447111097s] Jan 22 14:07:20.510: INFO: Created: latency-svc-x7bp8 Jan 22 14:07:20.568: INFO: Got endpoints: latency-svc-x7bp8 [1.539210214s] Jan 22 14:07:20.581: INFO: Created: latency-svc-vr7sr Jan 22 14:07:20.584: INFO: Got endpoints: latency-svc-vr7sr [1.407131453s] Jan 22 14:07:20.693: INFO: Created: latency-svc-4gcmd Jan 22 14:07:20.702: INFO: Got endpoints: latency-svc-4gcmd [1.464858246s] Jan 22 14:07:20.862: INFO: Created: latency-svc-h2bs2 Jan 22 14:07:20.896: INFO: Got endpoints: latency-svc-h2bs2 [1.550006415s] Jan 22 14:07:20.960: INFO: Created: latency-svc-xhl8t Jan 22 14:07:21.020: INFO: Got endpoints: latency-svc-xhl8t [1.609478667s] Jan 22 14:07:21.073: INFO: Created: latency-svc-h9zrr Jan 22 14:07:21.092: INFO: Got endpoints: latency-svc-h9zrr [1.441469854s] Jan 22 14:07:21.240: INFO: Created: latency-svc-kb62h Jan 22 14:07:21.266: INFO: Got endpoints: latency-svc-kb62h [1.614050766s] Jan 22 14:07:21.300: INFO: Created: latency-svc-xbfnb Jan 22 14:07:21.303: INFO: Got endpoints: latency-svc-xbfnb [1.491876241s] Jan 22 14:07:21.472: INFO: Created: latency-svc-2n8pw Jan 22 14:07:21.475: INFO: Got endpoints: latency-svc-2n8pw [1.577428288s] Jan 22 14:07:21.547: INFO: Created: latency-svc-nm2cm Jan 22 14:07:21.556: INFO: Got endpoints: latency-svc-nm2cm [1.566652795s] Jan 22 14:07:21.734: INFO: Created: latency-svc-5zpj7 Jan 22 14:07:21.746: INFO: Got endpoints: latency-svc-5zpj7 [1.587129305s] Jan 22 14:07:21.948: INFO: Created: latency-svc-2cx5m Jan 22 14:07:21.955: INFO: Got endpoints: latency-svc-2cx5m [1.744568606s] Jan 22 14:07:22.020: INFO: Created: latency-svc-hztlj Jan 22 14:07:22.175: INFO: Got endpoints: latency-svc-hztlj [1.939172274s] Jan 22 14:07:22.186: INFO: Created: latency-svc-xf2mg Jan 22 14:07:22.193: INFO: Got endpoints: latency-svc-xf2mg [1.864703543s] Jan 22 14:07:22.232: INFO: Created: latency-svc-zkxng Jan 22 14:07:22.251: INFO: Got endpoints: latency-svc-zkxng [1.839097411s] Jan 22 14:07:22.349: INFO: Created: latency-svc-lrzdh Jan 22 
14:07:22.397: INFO: Created: latency-svc-g5sls Jan 22 14:07:22.398: INFO: Got endpoints: latency-svc-lrzdh [1.829618139s] Jan 22 14:07:22.423: INFO: Got endpoints: latency-svc-g5sls [1.838347329s] Jan 22 14:07:22.541: INFO: Created: latency-svc-sshnc Jan 22 14:07:22.548: INFO: Got endpoints: latency-svc-sshnc [1.845953304s] Jan 22 14:07:22.616: INFO: Created: latency-svc-kn9tp Jan 22 14:07:22.625: INFO: Got endpoints: latency-svc-kn9tp [1.728335911s] Jan 22 14:07:22.730: INFO: Created: latency-svc-vpfdk Jan 22 14:07:22.750: INFO: Got endpoints: latency-svc-vpfdk [1.730328551s] Jan 22 14:07:22.802: INFO: Created: latency-svc-5mnff Jan 22 14:07:22.913: INFO: Got endpoints: latency-svc-5mnff [1.820834718s] Jan 22 14:07:22.923: INFO: Created: latency-svc-t5txp Jan 22 14:07:22.935: INFO: Got endpoints: latency-svc-t5txp [1.669439822s] Jan 22 14:07:22.998: INFO: Created: latency-svc-scfm7 Jan 22 14:07:23.002: INFO: Got endpoints: latency-svc-scfm7 [1.698759374s] Jan 22 14:07:23.105: INFO: Created: latency-svc-47cdf Jan 22 14:07:23.119: INFO: Got endpoints: latency-svc-47cdf [1.643385851s] Jan 22 14:07:23.158: INFO: Created: latency-svc-kbr5c Jan 22 14:07:23.196: INFO: Got endpoints: latency-svc-kbr5c [1.63991641s] Jan 22 14:07:23.374: INFO: Created: latency-svc-ws2q2 Jan 22 14:07:23.390: INFO: Got endpoints: latency-svc-ws2q2 [1.644114202s] Jan 22 14:07:23.626: INFO: Created: latency-svc-65ntr Jan 22 14:07:23.636: INFO: Got endpoints: latency-svc-65ntr [1.6808207s] Jan 22 14:07:23.702: INFO: Created: latency-svc-9v7jm Jan 22 14:07:23.832: INFO: Got endpoints: latency-svc-9v7jm [1.657439812s] Jan 22 14:07:23.894: INFO: Created: latency-svc-vsvtp Jan 22 14:07:23.894: INFO: Got endpoints: latency-svc-vsvtp [1.701345034s] Jan 22 14:07:24.061: INFO: Created: latency-svc-4b4dv Jan 22 14:07:24.062: INFO: Got endpoints: latency-svc-4b4dv [1.810816152s] Jan 22 14:07:24.135: INFO: Created: latency-svc-msjsb Jan 22 14:07:24.218: INFO: Got endpoints: latency-svc-msjsb [1.820129627s] Jan 22 14:07:24.229: INFO: Created: latency-svc-mcnzn Jan 22 14:07:24.234: INFO: Got endpoints: latency-svc-mcnzn [1.810788758s] Jan 22 14:07:24.292: INFO: Created: latency-svc-4bsv5 Jan 22 14:07:24.301: INFO: Got endpoints: latency-svc-4bsv5 [1.753068811s] Jan 22 14:07:24.419: INFO: Created: latency-svc-8dx7s Jan 22 14:07:24.419: INFO: Got endpoints: latency-svc-8dx7s [1.793746567s] Jan 22 14:07:24.479: INFO: Created: latency-svc-c247x Jan 22 14:07:24.492: INFO: Got endpoints: latency-svc-c247x [1.741250802s] Jan 22 14:07:24.683: INFO: Created: latency-svc-cwbkz Jan 22 14:07:24.697: INFO: Got endpoints: latency-svc-cwbkz [1.783544019s] Jan 22 14:07:24.748: INFO: Created: latency-svc-62zpx Jan 22 14:07:24.748: INFO: Got endpoints: latency-svc-62zpx [1.812256331s] Jan 22 14:07:24.912: INFO: Created: latency-svc-gx5vs Jan 22 14:07:24.970: INFO: Created: latency-svc-rb99c Jan 22 14:07:24.971: INFO: Got endpoints: latency-svc-gx5vs [1.968800669s] Jan 22 14:07:24.976: INFO: Got endpoints: latency-svc-rb99c [1.857568661s] Jan 22 14:07:25.104: INFO: Created: latency-svc-658dq Jan 22 14:07:25.128: INFO: Got endpoints: latency-svc-658dq [1.931573356s] Jan 22 14:07:25.168: INFO: Created: latency-svc-cwjvx Jan 22 14:07:25.173: INFO: Got endpoints: latency-svc-cwjvx [1.782314021s] Jan 22 14:07:25.268: INFO: Created: latency-svc-x5x6m Jan 22 14:07:25.272: INFO: Got endpoints: latency-svc-x5x6m [1.636112908s] Jan 22 14:07:25.329: INFO: Created: latency-svc-226jb Jan 22 14:07:25.330: INFO: Got endpoints: latency-svc-226jb [1.497764725s] Jan 22 
14:07:25.444: INFO: Created: latency-svc-jd8xm Jan 22 14:07:25.449: INFO: Got endpoints: latency-svc-jd8xm [1.554457872s] Jan 22 14:07:25.497: INFO: Created: latency-svc-7nhcs Jan 22 14:07:25.507: INFO: Got endpoints: latency-svc-7nhcs [1.445487918s] Jan 22 14:07:25.626: INFO: Created: latency-svc-c2xcv Jan 22 14:07:25.652: INFO: Created: latency-svc-dzjkm Jan 22 14:07:25.653: INFO: Got endpoints: latency-svc-c2xcv [1.434576421s] Jan 22 14:07:25.662: INFO: Got endpoints: latency-svc-dzjkm [1.428226511s] Jan 22 14:07:25.715: INFO: Created: latency-svc-sclkx Jan 22 14:07:25.796: INFO: Got endpoints: latency-svc-sclkx [1.494793123s] Jan 22 14:07:25.828: INFO: Created: latency-svc-lj54k Jan 22 14:07:25.896: INFO: Got endpoints: latency-svc-lj54k [1.47660805s] Jan 22 14:07:26.139: INFO: Created: latency-svc-dp59v Jan 22 14:07:26.151: INFO: Got endpoints: latency-svc-dp59v [1.658833277s] Jan 22 14:07:26.261: INFO: Created: latency-svc-6hdgc Jan 22 14:07:26.272: INFO: Got endpoints: latency-svc-6hdgc [1.575386021s] Jan 22 14:07:26.344: INFO: Created: latency-svc-lxwkt Jan 22 14:07:26.354: INFO: Got endpoints: latency-svc-lxwkt [1.60578811s] Jan 22 14:07:26.439: INFO: Created: latency-svc-p8zgg Jan 22 14:07:26.452: INFO: Got endpoints: latency-svc-p8zgg [1.480981086s] Jan 22 14:07:26.490: INFO: Created: latency-svc-bf9hp Jan 22 14:07:26.505: INFO: Got endpoints: latency-svc-bf9hp [1.528150477s] Jan 22 14:07:26.629: INFO: Created: latency-svc-z98dg Jan 22 14:07:26.675: INFO: Got endpoints: latency-svc-z98dg [1.546876811s] Jan 22 14:07:26.682: INFO: Created: latency-svc-7d4kz Jan 22 14:07:26.687: INFO: Got endpoints: latency-svc-7d4kz [1.51428197s] Jan 22 14:07:26.793: INFO: Created: latency-svc-hv8wr Jan 22 14:07:26.803: INFO: Got endpoints: latency-svc-hv8wr [1.530852546s] Jan 22 14:07:26.832: INFO: Created: latency-svc-hxxzw Jan 22 14:07:26.841: INFO: Got endpoints: latency-svc-hxxzw [1.510939569s] Jan 22 14:07:26.876: INFO: Created: latency-svc-hwfbj Jan 22 14:07:26.882: INFO: Got endpoints: latency-svc-hwfbj [1.433223618s] Jan 22 14:07:27.009: INFO: Created: latency-svc-ng222 Jan 22 14:07:27.042: INFO: Got endpoints: latency-svc-ng222 [1.534878976s] Jan 22 14:07:27.045: INFO: Created: latency-svc-nxqw2 Jan 22 14:07:27.057: INFO: Got endpoints: latency-svc-nxqw2 [1.404606631s] Jan 22 14:07:27.165: INFO: Created: latency-svc-pn2f6 Jan 22 14:07:27.168: INFO: Got endpoints: latency-svc-pn2f6 [1.505631071s] Jan 22 14:07:27.233: INFO: Created: latency-svc-pwzpn Jan 22 14:07:27.251: INFO: Got endpoints: latency-svc-pwzpn [1.454781516s] Jan 22 14:07:27.326: INFO: Created: latency-svc-zh8mk Jan 22 14:07:27.337: INFO: Got endpoints: latency-svc-zh8mk [1.440796419s] Jan 22 14:07:27.383: INFO: Created: latency-svc-vgftp Jan 22 14:07:27.388: INFO: Got endpoints: latency-svc-vgftp [1.236558253s] Jan 22 14:07:27.520: INFO: Created: latency-svc-p5fn2 Jan 22 14:07:27.525: INFO: Got endpoints: latency-svc-p5fn2 [1.252937533s] Jan 22 14:07:27.586: INFO: Created: latency-svc-9krhx Jan 22 14:07:27.587: INFO: Got endpoints: latency-svc-9krhx [1.233328376s] Jan 22 14:07:27.665: INFO: Created: latency-svc-5sn4z Jan 22 14:07:27.678: INFO: Got endpoints: latency-svc-5sn4z [1.226319543s] Jan 22 14:07:27.756: INFO: Created: latency-svc-bfhfx Jan 22 14:07:27.882: INFO: Got endpoints: latency-svc-bfhfx [1.376918628s] Jan 22 14:07:27.905: INFO: Created: latency-svc-xjs86 Jan 22 14:07:27.908: INFO: Got endpoints: latency-svc-xjs86 [1.232953434s] Jan 22 14:07:27.947: INFO: Created: latency-svc-vtk28 Jan 22 14:07:27.961: INFO: 
Got endpoints: latency-svc-vtk28 [1.2737596s] Jan 22 14:07:28.091: INFO: Created: latency-svc-sggjx Jan 22 14:07:28.093: INFO: Got endpoints: latency-svc-sggjx [1.289657005s] Jan 22 14:07:28.164: INFO: Created: latency-svc-s5b2z Jan 22 14:07:28.282: INFO: Got endpoints: latency-svc-s5b2z [1.440477067s] Jan 22 14:07:28.299: INFO: Created: latency-svc-6lk44 Jan 22 14:07:28.310: INFO: Got endpoints: latency-svc-6lk44 [1.427533632s] Jan 22 14:07:28.331: INFO: Created: latency-svc-dt2hj Jan 22 14:07:28.336: INFO: Got endpoints: latency-svc-dt2hj [1.293118681s] Jan 22 14:07:28.376: INFO: Created: latency-svc-fvzjh Jan 22 14:07:28.380: INFO: Got endpoints: latency-svc-fvzjh [1.32277056s] Jan 22 14:07:28.473: INFO: Created: latency-svc-fpj8j Jan 22 14:07:28.478: INFO: Got endpoints: latency-svc-fpj8j [1.31000896s] Jan 22 14:07:28.547: INFO: Created: latency-svc-79m92 Jan 22 14:07:28.637: INFO: Got endpoints: latency-svc-79m92 [1.385295097s] Jan 22 14:07:28.655: INFO: Created: latency-svc-72gjz Jan 22 14:07:28.662: INFO: Got endpoints: latency-svc-72gjz [1.324972554s] Jan 22 14:07:28.717: INFO: Created: latency-svc-d7dx2 Jan 22 14:07:28.826: INFO: Got endpoints: latency-svc-d7dx2 [1.437950631s] Jan 22 14:07:28.831: INFO: Created: latency-svc-2qlrh Jan 22 14:07:28.845: INFO: Got endpoints: latency-svc-2qlrh [1.320150522s] Jan 22 14:07:28.892: INFO: Created: latency-svc-9r2nw Jan 22 14:07:28.900: INFO: Got endpoints: latency-svc-9r2nw [1.313131313s] Jan 22 14:07:29.045: INFO: Created: latency-svc-cj6z5 Jan 22 14:07:29.056: INFO: Got endpoints: latency-svc-cj6z5 [1.378074929s] Jan 22 14:07:29.115: INFO: Created: latency-svc-vmp67 Jan 22 14:07:29.128: INFO: Got endpoints: latency-svc-vmp67 [1.245928806s] Jan 22 14:07:29.267: INFO: Created: latency-svc-gmrmw Jan 22 14:07:29.268: INFO: Got endpoints: latency-svc-gmrmw [1.35967541s] Jan 22 14:07:29.472: INFO: Created: latency-svc-dqzvw Jan 22 14:07:29.481: INFO: Got endpoints: latency-svc-dqzvw [1.519990771s] Jan 22 14:07:29.529: INFO: Created: latency-svc-xhk8g Jan 22 14:07:29.533: INFO: Got endpoints: latency-svc-xhk8g [1.440568666s] Jan 22 14:07:29.714: INFO: Created: latency-svc-tl28n Jan 22 14:07:29.754: INFO: Got endpoints: latency-svc-tl28n [1.471385421s] Jan 22 14:07:29.931: INFO: Created: latency-svc-z9c67 Jan 22 14:07:29.945: INFO: Got endpoints: latency-svc-z9c67 [1.635038164s] Jan 22 14:07:29.994: INFO: Created: latency-svc-5fbtd Jan 22 14:07:30.000: INFO: Got endpoints: latency-svc-5fbtd [1.664544814s] Jan 22 14:07:30.181: INFO: Created: latency-svc-vdgmc Jan 22 14:07:30.230: INFO: Got endpoints: latency-svc-vdgmc [1.850104375s] Jan 22 14:07:30.238: INFO: Created: latency-svc-qkwgf Jan 22 14:07:30.269: INFO: Got endpoints: latency-svc-qkwgf [1.790761908s] Jan 22 14:07:30.329: INFO: Created: latency-svc-gm4vm Jan 22 14:07:30.398: INFO: Got endpoints: latency-svc-gm4vm [1.760771093s] Jan 22 14:07:30.413: INFO: Created: latency-svc-n7m2w Jan 22 14:07:30.475: INFO: Got endpoints: latency-svc-n7m2w [1.813358292s] Jan 22 14:07:30.477: INFO: Created: latency-svc-4xt4d Jan 22 14:07:30.509: INFO: Got endpoints: latency-svc-4xt4d [1.683406993s] Jan 22 14:07:30.541: INFO: Created: latency-svc-pcc6n Jan 22 14:07:30.553: INFO: Got endpoints: latency-svc-pcc6n [1.70804874s] Jan 22 14:07:30.634: INFO: Created: latency-svc-bw7sx Jan 22 14:07:30.659: INFO: Got endpoints: latency-svc-bw7sx [1.758109548s] Jan 22 14:07:30.732: INFO: Created: latency-svc-c8xqt Jan 22 14:07:30.774: INFO: Got endpoints: latency-svc-c8xqt [1.717796843s] Jan 22 14:07:30.800: INFO: 
Created: latency-svc-rvhv5 Jan 22 14:07:30.819: INFO: Got endpoints: latency-svc-rvhv5 [1.691197923s] Jan 22 14:07:30.867: INFO: Created: latency-svc-g6cv9 Jan 22 14:07:30.962: INFO: Got endpoints: latency-svc-g6cv9 [1.693846824s] Jan 22 14:07:31.002: INFO: Created: latency-svc-zflxf Jan 22 14:07:31.012: INFO: Got endpoints: latency-svc-zflxf [1.530895652s] Jan 22 14:07:31.166: INFO: Created: latency-svc-wlb9d Jan 22 14:07:31.181: INFO: Got endpoints: latency-svc-wlb9d [1.647732727s] Jan 22 14:07:31.214: INFO: Created: latency-svc-8nb8b Jan 22 14:07:31.228: INFO: Got endpoints: latency-svc-8nb8b [1.474619707s] Jan 22 14:07:31.263: INFO: Created: latency-svc-5tptf Jan 22 14:07:31.350: INFO: Got endpoints: latency-svc-5tptf [1.404817417s] Jan 22 14:07:31.382: INFO: Created: latency-svc-t49t5 Jan 22 14:07:31.389: INFO: Got endpoints: latency-svc-t49t5 [1.38836492s] Jan 22 14:07:31.432: INFO: Created: latency-svc-rjb8f Jan 22 14:07:31.500: INFO: Got endpoints: latency-svc-rjb8f [1.26961382s] Jan 22 14:07:31.512: INFO: Created: latency-svc-sngzg Jan 22 14:07:31.523: INFO: Got endpoints: latency-svc-sngzg [1.253714757s] Jan 22 14:07:31.571: INFO: Created: latency-svc-648fb Jan 22 14:07:31.587: INFO: Got endpoints: latency-svc-648fb [1.188913268s] Jan 22 14:07:31.712: INFO: Created: latency-svc-dwxnz Jan 22 14:07:31.746: INFO: Got endpoints: latency-svc-dwxnz [1.270726403s] Jan 22 14:07:31.801: INFO: Created: latency-svc-q2hd4 Jan 22 14:07:31.873: INFO: Got endpoints: latency-svc-q2hd4 [1.364062738s] Jan 22 14:07:31.886: INFO: Created: latency-svc-vc89x Jan 22 14:07:31.899: INFO: Got endpoints: latency-svc-vc89x [1.345660072s] Jan 22 14:07:31.954: INFO: Created: latency-svc-j5cf5 Jan 22 14:07:32.020: INFO: Got endpoints: latency-svc-j5cf5 [1.361071547s] Jan 22 14:07:32.072: INFO: Created: latency-svc-jbqrh Jan 22 14:07:32.091: INFO: Got endpoints: latency-svc-jbqrh [1.31642157s] Jan 22 14:07:32.239: INFO: Created: latency-svc-csltn Jan 22 14:07:32.262: INFO: Got endpoints: latency-svc-csltn [1.442223798s] Jan 22 14:07:32.314: INFO: Created: latency-svc-tqkqv Jan 22 14:07:32.468: INFO: Got endpoints: latency-svc-tqkqv [1.505983383s] Jan 22 14:07:32.495: INFO: Created: latency-svc-gmd9q Jan 22 14:07:32.505: INFO: Got endpoints: latency-svc-gmd9q [1.49257706s] Jan 22 14:07:32.535: INFO: Created: latency-svc-czq9v Jan 22 14:07:32.550: INFO: Got endpoints: latency-svc-czq9v [1.368637911s] Jan 22 14:07:33.278: INFO: Created: latency-svc-5mgxf Jan 22 14:07:33.294: INFO: Got endpoints: latency-svc-5mgxf [2.065422486s] Jan 22 14:07:33.434: INFO: Created: latency-svc-4xk5g Jan 22 14:07:33.439: INFO: Got endpoints: latency-svc-4xk5g [2.089371636s] Jan 22 14:07:33.485: INFO: Created: latency-svc-hmztr Jan 22 14:07:33.499: INFO: Got endpoints: latency-svc-hmztr [2.110642043s] Jan 22 14:07:33.660: INFO: Created: latency-svc-8jwdx Jan 22 14:07:33.666: INFO: Got endpoints: latency-svc-8jwdx [2.165986186s] Jan 22 14:07:33.711: INFO: Created: latency-svc-5894t Jan 22 14:07:33.719: INFO: Got endpoints: latency-svc-5894t [2.196045954s] Jan 22 14:07:33.836: INFO: Created: latency-svc-2llqh Jan 22 14:07:33.886: INFO: Got endpoints: latency-svc-2llqh [2.298503658s] Jan 22 14:07:33.893: INFO: Created: latency-svc-j9dsd Jan 22 14:07:34.045: INFO: Got endpoints: latency-svc-j9dsd [2.298585089s] Jan 22 14:07:34.082: INFO: Created: latency-svc-t8cfr Jan 22 14:07:34.095: INFO: Got endpoints: latency-svc-t8cfr [2.221655973s] Jan 22 14:07:34.241: INFO: Created: latency-svc-4d2lc Jan 22 14:07:34.300: INFO: Got endpoints: 
latency-svc-4d2lc [2.399933868s] Jan 22 14:07:34.311: INFO: Created: latency-svc-t8hvd Jan 22 14:07:34.422: INFO: Created: latency-svc-k7xp6 Jan 22 14:07:34.423: INFO: Got endpoints: latency-svc-t8hvd [2.402765887s] Jan 22 14:07:34.431: INFO: Got endpoints: latency-svc-k7xp6 [2.339336535s] Jan 22 14:07:34.489: INFO: Created: latency-svc-dlm7f Jan 22 14:07:34.497: INFO: Got endpoints: latency-svc-dlm7f [2.235109561s] Jan 22 14:07:34.629: INFO: Created: latency-svc-cl98q Jan 22 14:07:34.635: INFO: Got endpoints: latency-svc-cl98q [2.166497722s] Jan 22 14:07:34.685: INFO: Created: latency-svc-rf968 Jan 22 14:07:34.761: INFO: Got endpoints: latency-svc-rf968 [2.25615118s] Jan 22 14:07:34.761: INFO: Latencies: [174.743474ms 175.417966ms 309.627443ms 500.606426ms 532.762184ms 700.574001ms 764.503925ms 892.145566ms 957.128146ms 1.064914961s 1.116105799s 1.188913268s 1.226319543s 1.232953434s 1.233328376s 1.236554915s 1.236558253s 1.240766181s 1.241858559s 1.245928806s 1.252937533s 1.253714757s 1.255581314s 1.259396565s 1.263835461s 1.26961382s 1.270231413s 1.270726403s 1.273690614s 1.2737596s 1.276112674s 1.280755942s 1.285521682s 1.289657005s 1.292385368s 1.293118681s 1.296448312s 1.298507227s 1.299161821s 1.305602928s 1.305610524s 1.31000896s 1.313131313s 1.31642157s 1.318706124s 1.320150522s 1.32277056s 1.324972554s 1.339163334s 1.344432926s 1.345660072s 1.348792487s 1.35967541s 1.360419826s 1.361071547s 1.361658501s 1.364062738s 1.368637911s 1.370230609s 1.376918628s 1.378074929s 1.385295097s 1.385817523s 1.386585862s 1.38836492s 1.403486043s 1.404606631s 1.404817417s 1.407131453s 1.427533632s 1.428094665s 1.428226511s 1.429822825s 1.431350559s 1.433223618s 1.434576421s 1.437950631s 1.440477067s 1.440568666s 1.440796419s 1.441469854s 1.442223798s 1.445487918s 1.447111097s 1.454781516s 1.464858246s 1.471385421s 1.472157655s 1.474619707s 1.475381244s 1.47660805s 1.480981086s 1.485556066s 1.490398795s 1.491876241s 1.49257706s 1.494793123s 1.496903628s 1.497764725s 1.505631071s 1.505983383s 1.510939569s 1.51428197s 1.516413022s 1.518210782s 1.519990771s 1.523525592s 1.528150477s 1.530713981s 1.530852546s 1.530895652s 1.534878976s 1.538844769s 1.539210214s 1.546876811s 1.550006415s 1.553026542s 1.554457872s 1.558656375s 1.566652795s 1.575386021s 1.577428288s 1.578260998s 1.580290704s 1.58236084s 1.587129305s 1.60578811s 1.609478667s 1.614050766s 1.629438644s 1.635038164s 1.635925514s 1.636112908s 1.63991641s 1.643385851s 1.644114202s 1.647732727s 1.649562469s 1.657439812s 1.658833277s 1.661836776s 1.664544814s 1.669439822s 1.6808207s 1.683406993s 1.690015582s 1.691197923s 1.693846824s 1.694450206s 1.698759374s 1.701032847s 1.701345034s 1.70804874s 1.711172217s 1.717796843s 1.719573683s 1.728045963s 1.728335911s 1.730328551s 1.741250802s 1.744568606s 1.753068811s 1.753140588s 1.753522194s 1.758109548s 1.760771093s 1.782314021s 1.783544019s 1.790761908s 1.793746567s 1.810788758s 1.810816152s 1.812256331s 1.813358292s 1.820129627s 1.820834718s 1.829618139s 1.838347329s 1.839097411s 1.845953304s 1.850104375s 1.857568661s 1.864703543s 1.931573356s 1.939172274s 1.968800669s 2.065422486s 2.089371636s 2.110642043s 2.165986186s 2.166497722s 2.196045954s 2.221655973s 2.235109561s 2.25615118s 2.298503658s 2.298585089s 2.339336535s 2.399933868s 2.402765887s] Jan 22 14:07:34.762: INFO: 50 %ile: 1.505983383s Jan 22 14:07:34.762: INFO: 90 %ile: 1.850104375s Jan 22 14:07:34.762: INFO: 99 %ile: 2.399933868s Jan 22 14:07:34.762: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:07:34.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6210" for this suite. Jan 22 14:08:16.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:08:16.922: INFO: namespace svc-latency-6210 deletion completed in 42.155796549s • [SLOW TEST:72.673 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:08:16.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 22 14:08:17.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jan 22 14:08:17.169: INFO: stderr: "" Jan 22 14:08:17.170: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:08:17.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7430" for this suite. 
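The version spec above only asserts that both halves of the handshake are printed. The same check from a shell, as a rough sketch (the JSON output flag exists in this client generation, but treat that as an assumption):

    kubectl version                                 # expect a Client Version line and a Server Version line
    kubectl version -o json | grep -c gitVersion    # 2 = one client stanza + one server stanza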
Jan 22 14:08:23.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:08:23.470: INFO: namespace kubectl-7430 deletion completed in 6.287588103s • [SLOW TEST:6.548 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:08:23.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-31c06ccd-2084-4366-9697-03b098197b87 STEP: Creating configMap with name cm-test-opt-upd-8c9c359d-9288-42ff-99ec-ac0134d0563a STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-31c06ccd-2084-4366-9697-03b098197b87 STEP: Updating configmap cm-test-opt-upd-8c9c359d-9288-42ff-99ec-ac0134d0563a STEP: Creating configMap with name cm-test-opt-create-b8c2fca0-4ab4-4e57-ad7e-d5652711d91e STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:08:37.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7007" for this suite. 
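The create/delete/update dance in the ConfigMap spec above hinges on the volume source's optional flag: a referenced ConfigMap may be absent or deleted, the pod still runs, and the kubelet resyncs the mounted files as the ConfigMap changes. A minimal sketch of that shape with illustrative names:

    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-optional-demo        # illustrative name
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
        volumeMounts:
        - name: cfg
          mountPath: /etc/cfg
      volumes:
      - name: cfg
        configMap:
          name: maybe-missing-cm    # need not exist at pod creation
          optional: true            # pod starts anyway; files appear/refresh as the CM changes
    EOF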
Jan 22 14:09:02.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:09:02.093: INFO: namespace configmap-7007 deletion completed in 24.10933703s • [SLOW TEST:38.622 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:09:02.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 22 14:09:02.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-619' Jan 22 14:09:02.316: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 22 14:09:02.316: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Jan 22 14:09:04.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-619' Jan 22 14:09:04.489: INFO: stderr: "" Jan 22 14:09:04.489: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:09:04.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-619" for this suite. 
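The deprecation warning captured above is the point of this spec: a bare kubectl run still falls back to the deployment/apps.v1 generator. The alternatives the warning recommends, as of this client version ("web" is an illustrative name):

    kubectl run web --image=docker.io/library/nginx:1.14-alpine                          # Deployment, plus the deprecation warning
    kubectl run web --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine   # a single Pod, no warning
    kubectl create deployment web --image=docker.io/library/nginx:1.14-alpine            # the non-deprecated Deployment path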
Jan 22 14:09:10.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:09:10.877: INFO: namespace kubectl-619 deletion completed in 6.375346277s • [SLOW TEST:8.784 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:09:10.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-4a877625-b130-4a3a-9ead-1d8373e4dcb0 STEP: Creating a pod to test consume secrets Jan 22 14:09:11.032: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3d4bc4c9-9647-4a8c-97e4-81ccce28deb4" in namespace "projected-5674" to be "success or failure" Jan 22 14:09:11.042: INFO: Pod "pod-projected-secrets-3d4bc4c9-9647-4a8c-97e4-81ccce28deb4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.458945ms Jan 22 14:09:13.050: INFO: Pod "pod-projected-secrets-3d4bc4c9-9647-4a8c-97e4-81ccce28deb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017818192s Jan 22 14:09:15.057: INFO: Pod "pod-projected-secrets-3d4bc4c9-9647-4a8c-97e4-81ccce28deb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025264478s Jan 22 14:09:17.065: INFO: Pod "pod-projected-secrets-3d4bc4c9-9647-4a8c-97e4-81ccce28deb4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033101823s Jan 22 14:09:19.077: INFO: Pod "pod-projected-secrets-3d4bc4c9-9647-4a8c-97e4-81ccce28deb4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044638569s Jan 22 14:09:21.083: INFO: Pod "pod-projected-secrets-3d4bc4c9-9647-4a8c-97e4-81ccce28deb4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.05114147s STEP: Saw pod success Jan 22 14:09:21.083: INFO: Pod "pod-projected-secrets-3d4bc4c9-9647-4a8c-97e4-81ccce28deb4" satisfied condition "success or failure" Jan 22 14:09:21.087: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3d4bc4c9-9647-4a8c-97e4-81ccce28deb4 container projected-secret-volume-test: STEP: delete the pod Jan 22 14:09:21.403: INFO: Waiting for pod pod-projected-secrets-3d4bc4c9-9647-4a8c-97e4-81ccce28deb4 to disappear Jan 22 14:09:21.410: INFO: Pod pod-projected-secrets-3d4bc4c9-9647-4a8c-97e4-81ccce28deb4 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:09:21.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5674" for this suite. Jan 22 14:09:27.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:09:27.645: INFO: namespace projected-5674 deletion completed in 6.226447979s • [SLOW TEST:16.766 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:09:27.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 22 14:09:27.778: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03a37687-584e-4b9a-bbb5-b40da188c88d" in namespace "projected-5797" to be "success or failure" Jan 22 14:09:27.790: INFO: Pod "downwardapi-volume-03a37687-584e-4b9a-bbb5-b40da188c88d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.45846ms Jan 22 14:09:29.802: INFO: Pod "downwardapi-volume-03a37687-584e-4b9a-bbb5-b40da188c88d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024384955s Jan 22 14:09:31.809: INFO: Pod "downwardapi-volume-03a37687-584e-4b9a-bbb5-b40da188c88d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031409368s Jan 22 14:09:33.827: INFO: Pod "downwardapi-volume-03a37687-584e-4b9a-bbb5-b40da188c88d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049441252s Jan 22 14:09:35.845: INFO: Pod "downwardapi-volume-03a37687-584e-4b9a-bbb5-b40da188c88d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.066833702s Jan 22 14:09:37.858: INFO: Pod "downwardapi-volume-03a37687-584e-4b9a-bbb5-b40da188c88d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079783176s STEP: Saw pod success Jan 22 14:09:37.858: INFO: Pod "downwardapi-volume-03a37687-584e-4b9a-bbb5-b40da188c88d" satisfied condition "success or failure" Jan 22 14:09:37.867: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-03a37687-584e-4b9a-bbb5-b40da188c88d container client-container: STEP: delete the pod Jan 22 14:09:37.965: INFO: Waiting for pod downwardapi-volume-03a37687-584e-4b9a-bbb5-b40da188c88d to disappear Jan 22 14:09:37.974: INFO: Pod downwardapi-volume-03a37687-584e-4b9a-bbb5-b40da188c88d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:09:37.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5797" for this suite. Jan 22 14:09:44.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:09:44.620: INFO: namespace projected-5797 deletion completed in 6.637738091s • [SLOW TEST:16.975 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:09:44.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 22 14:09:52.816: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:09:52.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6040" for this suite. 
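The assertion "Expected: &{} to match Container's Termination Message: --" above encodes the policy semantics: FallbackToLogsOnError only copies container logs into the termination message when the container fails, so a clean exit with nothing written to the termination message path yields an empty message. A sketch with illustrative names; the busybox tag is an assumption:

    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: termmsg-demo            # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: docker.io/library/busybox:1.29   # tag is an assumption
        command: ["sh", "-c", "echo to-stdout-only; exit 0"]
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    # after the pod succeeds, the terminated state carries no message:
    kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'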
Jan 22 14:09:58.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:09:59.024: INFO: namespace container-runtime-6040 deletion completed in 6.142582627s • [SLOW TEST:14.404 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:09:59.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9053 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jan 22 14:09:59.253: INFO: Found 0 stateful pods, waiting for 3 Jan 22 14:10:09.500: INFO: Found 2 stateful pods, waiting for 3 Jan 22 14:10:19.262: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 22 14:10:19.262: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 22 14:10:19.262: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 22 14:10:29.261: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 22 14:10:29.261: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 22 14:10:29.261: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 22 14:10:29.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9053 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 22 14:10:29.816: INFO: stderr: "I0122 14:10:29.522334 2584 log.go:172] (0xc00081a420) (0xc00066e640) Create stream\nI0122 14:10:29.522717 2584 log.go:172] (0xc00081a420) (0xc00066e640) Stream added, broadcasting: 1\nI0122 14:10:29.526983 2584 log.go:172] (0xc00081a420) Reply frame received for 1\nI0122 14:10:29.527028 2584 log.go:172] (0xc00081a420) (0xc000652320) Create stream\nI0122 14:10:29.527046 2584 log.go:172] 
(0xc00081a420) (0xc000652320) Stream added, broadcasting: 3\nI0122 14:10:29.528581 2584 log.go:172] (0xc00081a420) Reply frame received for 3\nI0122 14:10:29.528609 2584 log.go:172] (0xc00081a420) (0xc00066e6e0) Create stream\nI0122 14:10:29.528622 2584 log.go:172] (0xc00081a420) (0xc00066e6e0) Stream added, broadcasting: 5\nI0122 14:10:29.530333 2584 log.go:172] (0xc00081a420) Reply frame received for 5\nI0122 14:10:29.692626 2584 log.go:172] (0xc00081a420) Data frame received for 5\nI0122 14:10:29.692705 2584 log.go:172] (0xc00066e6e0) (5) Data frame handling\nI0122 14:10:29.692729 2584 log.go:172] (0xc00066e6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0122 14:10:29.728760 2584 log.go:172] (0xc00081a420) Data frame received for 3\nI0122 14:10:29.728799 2584 log.go:172] (0xc000652320) (3) Data frame handling\nI0122 14:10:29.728813 2584 log.go:172] (0xc000652320) (3) Data frame sent\nI0122 14:10:29.812882 2584 log.go:172] (0xc00081a420) Data frame received for 1\nI0122 14:10:29.812924 2584 log.go:172] (0xc00081a420) (0xc000652320) Stream removed, broadcasting: 3\nI0122 14:10:29.812948 2584 log.go:172] (0xc00066e640) (1) Data frame handling\nI0122 14:10:29.812956 2584 log.go:172] (0xc00066e640) (1) Data frame sent\nI0122 14:10:29.812963 2584 log.go:172] (0xc00081a420) (0xc00066e640) Stream removed, broadcasting: 1\nI0122 14:10:29.813035 2584 log.go:172] (0xc00081a420) (0xc00066e6e0) Stream removed, broadcasting: 5\nI0122 14:10:29.813066 2584 log.go:172] (0xc00081a420) Go away received\nI0122 14:10:29.813209 2584 log.go:172] (0xc00081a420) (0xc00066e640) Stream removed, broadcasting: 1\nI0122 14:10:29.813267 2584 log.go:172] (0xc00081a420) (0xc000652320) Stream removed, broadcasting: 3\nI0122 14:10:29.813290 2584 log.go:172] (0xc00081a420) (0xc00066e6e0) Stream removed, broadcasting: 5\n" Jan 22 14:10:29.816: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 22 14:10:29.816: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 22 14:10:39.871: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 22 14:10:51.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9053 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:10:52.033: INFO: stderr: "I0122 14:10:51.825986 2601 log.go:172] (0xc0001160b0) (0xc00073e0a0) Create stream\nI0122 14:10:51.826191 2601 log.go:172] (0xc0001160b0) (0xc00073e0a0) Stream added, broadcasting: 1\nI0122 14:10:51.829138 2601 log.go:172] (0xc0001160b0) Reply frame received for 1\nI0122 14:10:51.829158 2601 log.go:172] (0xc0001160b0) (0xc00073e140) Create stream\nI0122 14:10:51.829169 2601 log.go:172] (0xc0001160b0) (0xc00073e140) Stream added, broadcasting: 3\nI0122 14:10:51.829869 2601 log.go:172] (0xc0001160b0) Reply frame received for 3\nI0122 14:10:51.829888 2601 log.go:172] (0xc0001160b0) (0xc00072a000) Create stream\nI0122 14:10:51.829897 2601 log.go:172] (0xc0001160b0) (0xc00072a000) Stream added, broadcasting: 5\nI0122 14:10:51.830840 2601 log.go:172] (0xc0001160b0) Reply frame received for 5\nI0122 14:10:51.928751 2601 log.go:172] (0xc0001160b0) Data frame received for 5\nI0122 14:10:51.928943 2601 log.go:172] (0xc00072a000) (5) Data frame 
handling\nI0122 14:10:51.928962 2601 log.go:172] (0xc00072a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0122 14:10:51.929217 2601 log.go:172] (0xc0001160b0) Data frame received for 3\nI0122 14:10:51.929288 2601 log.go:172] (0xc00073e140) (3) Data frame handling\nI0122 14:10:51.929327 2601 log.go:172] (0xc00073e140) (3) Data frame sent\nI0122 14:10:52.029913 2601 log.go:172] (0xc0001160b0) (0xc00073e140) Stream removed, broadcasting: 3\nI0122 14:10:52.029977 2601 log.go:172] (0xc0001160b0) Data frame received for 1\nI0122 14:10:52.029987 2601 log.go:172] (0xc0001160b0) (0xc00072a000) Stream removed, broadcasting: 5\nI0122 14:10:52.029997 2601 log.go:172] (0xc00073e0a0) (1) Data frame handling\nI0122 14:10:52.030036 2601 log.go:172] (0xc00073e0a0) (1) Data frame sent\nI0122 14:10:52.030045 2601 log.go:172] (0xc0001160b0) (0xc00073e0a0) Stream removed, broadcasting: 1\nI0122 14:10:52.030052 2601 log.go:172] (0xc0001160b0) Go away received\nI0122 14:10:52.030396 2601 log.go:172] (0xc0001160b0) (0xc00073e0a0) Stream removed, broadcasting: 1\nI0122 14:10:52.030413 2601 log.go:172] (0xc0001160b0) (0xc00073e140) Stream removed, broadcasting: 3\nI0122 14:10:52.030421 2601 log.go:172] (0xc0001160b0) (0xc00072a000) Stream removed, broadcasting: 5\n" Jan 22 14:10:52.033: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 22 14:10:52.033: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 22 14:11:02.073: INFO: Waiting for StatefulSet statefulset-9053/ss2 to complete update Jan 22 14:11:02.073: INFO: Waiting for Pod statefulset-9053/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 22 14:11:02.073: INFO: Waiting for Pod statefulset-9053/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 22 14:11:02.073: INFO: Waiting for Pod statefulset-9053/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 22 14:11:12.093: INFO: Waiting for StatefulSet statefulset-9053/ss2 to complete update Jan 22 14:11:12.093: INFO: Waiting for Pod statefulset-9053/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 22 14:11:12.093: INFO: Waiting for Pod statefulset-9053/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 22 14:11:22.087: INFO: Waiting for StatefulSet statefulset-9053/ss2 to complete update Jan 22 14:11:22.087: INFO: Waiting for Pod statefulset-9053/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 22 14:11:32.086: INFO: Waiting for StatefulSet statefulset-9053/ss2 to complete update Jan 22 14:11:32.086: INFO: Waiting for Pod statefulset-9053/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 22 14:11:42.088: INFO: Waiting for StatefulSet statefulset-9053/ss2 to complete update STEP: Rolling back to a previous revision Jan 22 14:11:52.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9053 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 22 14:11:52.559: INFO: stderr: "I0122 14:11:52.273764 2617 log.go:172] (0xc0008c82c0) (0xc00062c780) Create stream\nI0122 14:11:52.273930 2617 log.go:172] (0xc0008c82c0) (0xc00062c780) Stream added, broadcasting: 1\nI0122 14:11:52.277259 2617 log.go:172] (0xc0008c82c0) Reply frame received for 1\nI0122 14:11:52.277294 2617 log.go:172] (0xc0008c82c0) (0xc0008ea000) Create stream\nI0122 14:11:52.277305 2617 
log.go:172] (0xc0008c82c0) (0xc0008ea000) Stream added, broadcasting: 3\nI0122 14:11:52.278330 2617 log.go:172] (0xc0008c82c0) Reply frame received for 3\nI0122 14:11:52.278360 2617 log.go:172] (0xc0008c82c0) (0xc000826000) Create stream\nI0122 14:11:52.278379 2617 log.go:172] (0xc0008c82c0) (0xc000826000) Stream added, broadcasting: 5\nI0122 14:11:52.279341 2617 log.go:172] (0xc0008c82c0) Reply frame received for 5\nI0122 14:11:52.392065 2617 log.go:172] (0xc0008c82c0) Data frame received for 5\nI0122 14:11:52.392159 2617 log.go:172] (0xc000826000) (5) Data frame handling\nI0122 14:11:52.392183 2617 log.go:172] (0xc000826000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0122 14:11:52.425397 2617 log.go:172] (0xc0008c82c0) Data frame received for 3\nI0122 14:11:52.425490 2617 log.go:172] (0xc0008ea000) (3) Data frame handling\nI0122 14:11:52.425520 2617 log.go:172] (0xc0008ea000) (3) Data frame sent\nI0122 14:11:52.545606 2617 log.go:172] (0xc0008c82c0) Data frame received for 1\nI0122 14:11:52.547080 2617 log.go:172] (0xc0008c82c0) (0xc0008ea000) Stream removed, broadcasting: 3\nI0122 14:11:52.547434 2617 log.go:172] (0xc00062c780) (1) Data frame handling\nI0122 14:11:52.547662 2617 log.go:172] (0xc00062c780) (1) Data frame sent\nI0122 14:11:52.547832 2617 log.go:172] (0xc0008c82c0) (0xc000826000) Stream removed, broadcasting: 5\nI0122 14:11:52.547938 2617 log.go:172] (0xc0008c82c0) (0xc00062c780) Stream removed, broadcasting: 1\nI0122 14:11:52.548027 2617 log.go:172] (0xc0008c82c0) Go away received\nI0122 14:11:52.549978 2617 log.go:172] (0xc0008c82c0) (0xc00062c780) Stream removed, broadcasting: 1\nI0122 14:11:52.549998 2617 log.go:172] (0xc0008c82c0) (0xc0008ea000) Stream removed, broadcasting: 3\nI0122 14:11:52.550012 2617 log.go:172] (0xc0008c82c0) (0xc000826000) Stream removed, broadcasting: 5\n" Jan 22 14:11:52.559: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 22 14:11:52.559: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 22 14:12:02.674: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 22 14:12:12.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9053 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:12:13.039: INFO: stderr: "I0122 14:12:12.891505 2637 log.go:172] (0xc00093e0b0) (0xc0008c0640) Create stream\nI0122 14:12:12.891714 2637 log.go:172] (0xc00093e0b0) (0xc0008c0640) Stream added, broadcasting: 1\nI0122 14:12:12.896598 2637 log.go:172] (0xc00093e0b0) Reply frame received for 1\nI0122 14:12:12.896658 2637 log.go:172] (0xc00093e0b0) (0xc0008c06e0) Create stream\nI0122 14:12:12.896669 2637 log.go:172] (0xc00093e0b0) (0xc0008c06e0) Stream added, broadcasting: 3\nI0122 14:12:12.898652 2637 log.go:172] (0xc00093e0b0) Reply frame received for 3\nI0122 14:12:12.898681 2637 log.go:172] (0xc00093e0b0) (0xc000a28000) Create stream\nI0122 14:12:12.898692 2637 log.go:172] (0xc00093e0b0) (0xc000a28000) Stream added, broadcasting: 5\nI0122 14:12:12.899808 2637 log.go:172] (0xc00093e0b0) Reply frame received for 5\nI0122 14:12:12.973199 2637 log.go:172] (0xc00093e0b0) Data frame received for 5\nI0122 14:12:12.973333 2637 log.go:172] (0xc000a28000) (5) Data frame handling\nI0122 14:12:12.973370 2637 log.go:172] (0xc000a28000) (5) Data frame sent\nI0122 14:12:12.973515 2637 log.go:172] (0xc00093e0b0) Data frame 
received for 3\nI0122 14:12:12.973551 2637 log.go:172] (0xc0008c06e0) (3) Data frame handling\nI0122 14:12:12.973561 2637 log.go:172] (0xc0008c06e0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0122 14:12:13.035459 2637 log.go:172] (0xc00093e0b0) (0xc0008c06e0) Stream removed, broadcasting: 3\nI0122 14:12:13.035521 2637 log.go:172] (0xc00093e0b0) Data frame received for 1\nI0122 14:12:13.035532 2637 log.go:172] (0xc0008c0640) (1) Data frame handling\nI0122 14:12:13.035540 2637 log.go:172] (0xc0008c0640) (1) Data frame sent\nI0122 14:12:13.035547 2637 log.go:172] (0xc00093e0b0) (0xc0008c0640) Stream removed, broadcasting: 1\nI0122 14:12:13.035904 2637 log.go:172] (0xc00093e0b0) (0xc000a28000) Stream removed, broadcasting: 5\nI0122 14:12:13.035924 2637 log.go:172] (0xc00093e0b0) (0xc0008c0640) Stream removed, broadcasting: 1\nI0122 14:12:13.035930 2637 log.go:172] (0xc00093e0b0) (0xc0008c06e0) Stream removed, broadcasting: 3\nI0122 14:12:13.035942 2637 log.go:172] (0xc00093e0b0) (0xc000a28000) Stream removed, broadcasting: 5\n" Jan 22 14:12:13.039: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 22 14:12:13.039: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 22 14:12:23.090: INFO: Waiting for StatefulSet statefulset-9053/ss2 to complete update Jan 22 14:12:23.090: INFO: Waiting for Pod statefulset-9053/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 14:12:23.090: INFO: Waiting for Pod statefulset-9053/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 14:12:33.107: INFO: Waiting for StatefulSet statefulset-9053/ss2 to complete update Jan 22 14:12:33.108: INFO: Waiting for Pod statefulset-9053/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 14:12:33.108: INFO: Waiting for Pod statefulset-9053/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 14:12:43.479: INFO: Waiting for StatefulSet statefulset-9053/ss2 to complete update Jan 22 14:12:43.479: INFO: Waiting for Pod statefulset-9053/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 14:12:53.100: INFO: Waiting for StatefulSet statefulset-9053/ss2 to complete update Jan 22 14:12:53.100: INFO: Waiting for Pod statefulset-9053/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 14:13:03.142: INFO: Waiting for StatefulSet statefulset-9053/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 22 14:13:13.105: INFO: Deleting all statefulset in ns statefulset-9053 Jan 22 14:13:13.108: INFO: Scaling statefulset ss2 to 0 Jan 22 14:13:43.145: INFO: Waiting for statefulset status.replicas updated to 0 Jan 22 14:13:43.150: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:13:43.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9053" for this suite. 
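Note: the StatefulSet spec winding down here changes only the pod template image (docker.io/library/nginx:1.14-alpine to 1.15-alpine) and then reverts it, letting the controller replace pods in reverse ordinal order; each template revision gets its own controller-revision hash (ss2-6c5cd755cd vs. ss2-7c9b54fd4c in the log above). A minimal StatefulSet of the same shape might look like the following; an illustrative sketch, not the exact ss2 fixture:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: ss2
    spec:
      serviceName: test          # matches the headless service created in the namespace above
      replicas: 3
      selector:
        matchLabels:
          app: ss2
      updateStrategy:
        type: RollingUpdate      # pods are replaced one ordinal at a time, highest first
      template:
        metadata:
          labels:
            app: ss2
        spec:
          containers:
          - name: nginx
            image: docker.io/library/nginx:1.14-alpine

The update itself is just a patch of spec.template (for example kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine against this sketch), and the rollback is the same patch in reverse.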
Jan 22 14:13:51.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:13:51.375: INFO: namespace statefulset-9053 deletion completed in 8.190583335s • [SLOW TEST:232.350 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:13:51.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 22 14:14:09.553: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 22 14:14:09.566: INFO: Pod pod-with-prestop-http-hook still exists Jan 22 14:14:11.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 22 14:14:11.579: INFO: Pod pod-with-prestop-http-hook still exists Jan 22 14:14:13.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 22 14:14:13.572: INFO: Pod pod-with-prestop-http-hook still exists Jan 22 14:14:15.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 22 14:14:15.572: INFO: Pod pod-with-prestop-http-hook still exists Jan 22 14:14:17.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 22 14:14:17.573: INFO: Pod pod-with-prestop-http-hook still exists Jan 22 14:14:19.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 22 14:14:19.576: INFO: Pod pod-with-prestop-http-hook still exists Jan 22 14:14:21.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 22 14:14:21.575: INFO: Pod pod-with-prestop-http-hook still exists Jan 22 14:14:23.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 22 14:14:23.573: INFO: Pod pod-with-prestop-http-hook still exists Jan 22 14:14:25.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 22 14:14:25.575: INFO: Pod pod-with-prestop-http-hook still exists Jan 22 14:14:27.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 22 14:14:27.572: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:14:27.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9821" for this suite. Jan 22 14:14:49.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:14:49.768: INFO: namespace container-lifecycle-hook-9821 deletion completed in 22.147593888s • [SLOW TEST:58.392 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:14:49.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
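Note: both lifecycle-hook specs in this stretch (the prestop one just completed and the poststart one starting here) exercise the same mechanism: a helper pod acts as the HTTP target, and the pod under test declares an httpGet hook that must reach it; preStop fires when the pod is deleted, postStart right after the container starts. A pod carrying both hook types might look like the following; an illustrative sketch, where the handler host and port are placeholders for the helper pod created in the step above:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-http-hooks
    spec:
      containers:
      - name: main
        image: nginx
        lifecycle:
          postStart:
            httpGet:
              path: /echo?msg=poststart
              host: 10.44.0.3   # placeholder: IP of the hook-handler pod
              port: 8080
          preStop:
            httpGet:
              path: /echo?msg=prestop
              host: 10.44.0.3   # placeholder: IP of the hook-handler pod
              port: 8080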
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 22 14:15:06.010: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 22 14:15:06.029: INFO: Pod pod-with-poststart-http-hook still exists Jan 22 14:15:08.029: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 22 14:15:08.077: INFO: Pod pod-with-poststart-http-hook still exists Jan 22 14:15:10.029: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 22 14:15:10.035: INFO: Pod pod-with-poststart-http-hook still exists Jan 22 14:15:12.029: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 22 14:15:12.038: INFO: Pod pod-with-poststart-http-hook still exists Jan 22 14:15:14.029: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 22 14:15:14.125: INFO: Pod pod-with-poststart-http-hook still exists Jan 22 14:15:16.029: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 22 14:15:16.036: INFO: Pod pod-with-poststart-http-hook still exists Jan 22 14:15:18.029: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 22 14:15:18.039: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:15:18.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2010" for this suite. Jan 22 14:15:40.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:15:40.178: INFO: namespace container-lifecycle-hook-2010 deletion completed in 22.122029592s • [SLOW TEST:50.410 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:15:40.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2190 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 22 14:15:40.269: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 22 14:16:18.443: INFO: 
ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2190 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 14:16:18.443: INFO: >>> kubeConfig: /root/.kube/config I0122 14:16:18.527186 9 log.go:172] (0xc00158ed10) (0xc001c14f00) Create stream I0122 14:16:18.527236 9 log.go:172] (0xc00158ed10) (0xc001c14f00) Stream added, broadcasting: 1 I0122 14:16:18.536665 9 log.go:172] (0xc00158ed10) Reply frame received for 1 I0122 14:16:18.536707 9 log.go:172] (0xc00158ed10) (0xc001c703c0) Create stream I0122 14:16:18.536714 9 log.go:172] (0xc00158ed10) (0xc001c703c0) Stream added, broadcasting: 3 I0122 14:16:18.538388 9 log.go:172] (0xc00158ed10) Reply frame received for 3 I0122 14:16:18.538416 9 log.go:172] (0xc00158ed10) (0xc001eaab40) Create stream I0122 14:16:18.538422 9 log.go:172] (0xc00158ed10) (0xc001eaab40) Stream added, broadcasting: 5 I0122 14:16:18.540782 9 log.go:172] (0xc00158ed10) Reply frame received for 5 I0122 14:16:19.738996 9 log.go:172] (0xc00158ed10) Data frame received for 3 I0122 14:16:19.739124 9 log.go:172] (0xc001c703c0) (3) Data frame handling I0122 14:16:19.739177 9 log.go:172] (0xc001c703c0) (3) Data frame sent I0122 14:16:19.934413 9 log.go:172] (0xc00158ed10) (0xc001c703c0) Stream removed, broadcasting: 3 I0122 14:16:19.934626 9 log.go:172] (0xc00158ed10) Data frame received for 1 I0122 14:16:19.934644 9 log.go:172] (0xc001c14f00) (1) Data frame handling I0122 14:16:19.934664 9 log.go:172] (0xc001c14f00) (1) Data frame sent I0122 14:16:19.934841 9 log.go:172] (0xc00158ed10) (0xc001c14f00) Stream removed, broadcasting: 1 I0122 14:16:19.935076 9 log.go:172] (0xc00158ed10) (0xc001eaab40) Stream removed, broadcasting: 5 I0122 14:16:19.935121 9 log.go:172] (0xc00158ed10) (0xc001c14f00) Stream removed, broadcasting: 1 I0122 14:16:19.935129 9 log.go:172] (0xc00158ed10) (0xc001c703c0) Stream removed, broadcasting: 3 I0122 14:16:19.935136 9 log.go:172] (0xc00158ed10) (0xc001eaab40) Stream removed, broadcasting: 5 I0122 14:16:19.935442 9 log.go:172] (0xc00158ed10) Go away received Jan 22 14:16:19.935: INFO: Found all expected endpoints: [netserver-0] Jan 22 14:16:19.951: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2190 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 14:16:19.951: INFO: >>> kubeConfig: /root/.kube/config I0122 14:16:20.015917 9 log.go:172] (0xc0016de160) (0xc00286dd60) Create stream I0122 14:16:20.015963 9 log.go:172] (0xc0016de160) (0xc00286dd60) Stream added, broadcasting: 1 I0122 14:16:20.021155 9 log.go:172] (0xc0016de160) Reply frame received for 1 I0122 14:16:20.021205 9 log.go:172] (0xc0016de160) (0xc001c70640) Create stream I0122 14:16:20.021225 9 log.go:172] (0xc0016de160) (0xc001c70640) Stream added, broadcasting: 3 I0122 14:16:20.024367 9 log.go:172] (0xc0016de160) Reply frame received for 3 I0122 14:16:20.024394 9 log.go:172] (0xc0016de160) (0xc001eaabe0) Create stream I0122 14:16:20.024402 9 log.go:172] (0xc0016de160) (0xc001eaabe0) Stream added, broadcasting: 5 I0122 14:16:20.026223 9 log.go:172] (0xc0016de160) Reply frame received for 5 I0122 14:16:21.142639 9 log.go:172] (0xc0016de160) Data frame received for 3 I0122 14:16:21.142742 9 log.go:172] (0xc001c70640) (3) Data frame handling I0122 14:16:21.142765 9 log.go:172] 
(0xc001c70640) (3) Data frame sent I0122 14:16:21.272574 9 log.go:172] (0xc0016de160) (0xc001c70640) Stream removed, broadcasting: 3 I0122 14:16:21.272735 9 log.go:172] (0xc0016de160) (0xc001eaabe0) Stream removed, broadcasting: 5 I0122 14:16:21.272787 9 log.go:172] (0xc0016de160) Data frame received for 1 I0122 14:16:21.272832 9 log.go:172] (0xc00286dd60) (1) Data frame handling I0122 14:16:21.272855 9 log.go:172] (0xc00286dd60) (1) Data frame sent I0122 14:16:21.272868 9 log.go:172] (0xc0016de160) (0xc00286dd60) Stream removed, broadcasting: 1 I0122 14:16:21.272896 9 log.go:172] (0xc0016de160) Go away received I0122 14:16:21.273642 9 log.go:172] (0xc0016de160) (0xc00286dd60) Stream removed, broadcasting: 1 I0122 14:16:21.273697 9 log.go:172] (0xc0016de160) (0xc001c70640) Stream removed, broadcasting: 3 I0122 14:16:21.273723 9 log.go:172] (0xc0016de160) (0xc001eaabe0) Stream removed, broadcasting: 5 Jan 22 14:16:21.273: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:16:21.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2190" for this suite. Jan 22 14:16:43.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:16:43.428: INFO: namespace pod-network-test-2190 deletion completed in 22.144772262s • [SLOW TEST:63.250 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:16:43.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-d8a39f76-d6be-4399-9158-285daf20d67b Jan 22 14:16:43.646: INFO: Pod name my-hostname-basic-d8a39f76-d6be-4399-9158-285daf20d67b: Found 0 pods out of 1 Jan 22 14:16:48.708: INFO: Pod name my-hostname-basic-d8a39f76-d6be-4399-9158-285daf20d67b: Found 1 pods out of 1 Jan 22 14:16:48.708: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d8a39f76-d6be-4399-9158-285daf20d67b" are running Jan 22 14:16:52.724: INFO: Pod "my-hostname-basic-d8a39f76-d6be-4399-9158-285daf20d67b-vwzlx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-22 14:16:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 
00:00:00 +0000 UTC LastTransitionTime:2020-01-22 14:16:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d8a39f76-d6be-4399-9158-285daf20d67b]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-22 14:16:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d8a39f76-d6be-4399-9158-285daf20d67b]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-22 14:16:43 +0000 UTC Reason: Message:}]) Jan 22 14:16:52.724: INFO: Trying to dial the pod Jan 22 14:16:57.787: INFO: Controller my-hostname-basic-d8a39f76-d6be-4399-9158-285daf20d67b: Got expected result from replica 1 [my-hostname-basic-d8a39f76-d6be-4399-9158-285daf20d67b-vwzlx]: "my-hostname-basic-d8a39f76-d6be-4399-9158-285daf20d67b-vwzlx", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:16:57.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3664" for this suite. Jan 22 14:17:03.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:17:04.014: INFO: namespace replication-controller-3664 deletion completed in 6.218107437s • [SLOW TEST:20.586 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:17:04.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 22 14:17:14.278: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:17:14.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6331" for this suite. 
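Note: the Container Runtime spec tearing down here is the complementary case to the earlier termination-message test: the container fails, so with FallbackToLogsOnError the kubelet falls back to the tail of the container log, and the message becomes DONE (the assertion above: Expected: &{DONE} to match Container's Termination Message). A sketch of such a failing container, with placeholder name and image:

    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-message-from-logs
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["/bin/sh", "-c", "echo DONE; exit 1"]   # fails, writes nothing to the message file
        terminationMessagePolicy: FallbackToLogsOnError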
Jan 22 14:17:20.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:17:20.529: INFO: namespace container-runtime-6331 deletion completed in 6.170608028s • [SLOW TEST:16.515 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:17:20.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Jan 22 14:17:20.624: INFO: Waiting up to 5m0s for pod "var-expansion-9a1b29e4-e7ff-41bf-bcdc-0dc792d0ba26" in namespace "var-expansion-902" to be "success or failure" Jan 22 14:17:20.635: INFO: Pod "var-expansion-9a1b29e4-e7ff-41bf-bcdc-0dc792d0ba26": Phase="Pending", Reason="", readiness=false. Elapsed: 10.914874ms Jan 22 14:17:22.643: INFO: Pod "var-expansion-9a1b29e4-e7ff-41bf-bcdc-0dc792d0ba26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018280733s Jan 22 14:17:24.711: INFO: Pod "var-expansion-9a1b29e4-e7ff-41bf-bcdc-0dc792d0ba26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086487061s Jan 22 14:17:26.727: INFO: Pod "var-expansion-9a1b29e4-e7ff-41bf-bcdc-0dc792d0ba26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102300541s Jan 22 14:17:28.731: INFO: Pod "var-expansion-9a1b29e4-e7ff-41bf-bcdc-0dc792d0ba26": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106652247s Jan 22 14:17:30.738: INFO: Pod "var-expansion-9a1b29e4-e7ff-41bf-bcdc-0dc792d0ba26": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.113285386s STEP: Saw pod success Jan 22 14:17:30.738: INFO: Pod "var-expansion-9a1b29e4-e7ff-41bf-bcdc-0dc792d0ba26" satisfied condition "success or failure" Jan 22 14:17:30.743: INFO: Trying to get logs from node iruya-node pod var-expansion-9a1b29e4-e7ff-41bf-bcdc-0dc792d0ba26 container dapi-container: STEP: delete the pod Jan 22 14:17:30.858: INFO: Waiting for pod var-expansion-9a1b29e4-e7ff-41bf-bcdc-0dc792d0ba26 to disappear Jan 22 14:17:30.868: INFO: Pod var-expansion-9a1b29e4-e7ff-41bf-bcdc-0dc792d0ba26 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:17:30.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-902" for this suite. Jan 22 14:17:36.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:17:37.017: INFO: namespace var-expansion-902 deletion completed in 6.141882601s • [SLOW TEST:16.487 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:17:37.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 22 14:17:37.083: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 22 14:17:37.095: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 22 14:17:42.107: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 22 14:17:46.126: INFO: Creating deployment "test-rolling-update-deployment" Jan 22 14:17:46.136: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 22 14:17:46.150: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 22 14:17:48.162: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 22 14:17:48.164: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715299466, loc:(*time.Location)(0x7ea48a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715299466, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715299466, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715299466, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 22 14:17:50.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715299466, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715299466, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715299466, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715299466, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 22 14:17:52.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715299466, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715299466, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715299466, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715299466, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 22 14:17:54.174: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 22 14:17:54.189: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-2100,SelfLink:/apis/apps/v1/namespaces/deployment-2100/deployments/test-rolling-update-deployment,UID:47ba0b60-17ca-4c96-9125-cfed158b056e,ResourceVersion:21443010,Generation:1,CreationTimestamp:2020-01-22 14:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-22 14:17:46 +0000 UTC 2020-01-22 14:17:46 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-22 14:17:53 +0000 UTC 2020-01-22 14:17:46 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 22 14:17:54.194: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-2100,SelfLink:/apis/apps/v1/namespaces/deployment-2100/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:608261e7-0489-42ad-a8a5-3447ec394932,ResourceVersion:21442999,Generation:1,CreationTimestamp:2020-01-22 14:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 47ba0b60-17ca-4c96-9125-cfed158b056e 0xc000532097 
0xc000532098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 22 14:17:54.194: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 22 14:17:54.195: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-2100,SelfLink:/apis/apps/v1/namespaces/deployment-2100/replicasets/test-rolling-update-controller,UID:cfa1597a-afae-4515-9138-75f5f26d687a,ResourceVersion:21443009,Generation:2,CreationTimestamp:2020-01-22 14:17:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 47ba0b60-17ca-4c96-9125-cfed158b056e 0xc002cc7ef7 0xc002cc7ef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: 
nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 22 14:17:54.199: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-tp8p9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-tp8p9,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-2100,SelfLink:/api/v1/namespaces/deployment-2100/pods/test-rolling-update-deployment-79f6b9d75c-tp8p9,UID:dbbd8291-9d74-45d4-8ad8-57afbe598b6b,ResourceVersion:21442998,Generation:0,CreationTimestamp:2020-01-22 14:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 608261e7-0489-42ad-a8a5-3447ec394932 0xc0008f6077 0xc0008f6078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jq452 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jq452,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-jq452 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0008f6130} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0008f6150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:17:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:17:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:17:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:17:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-22 14:17:46 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-22 14:17:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://b2753690510225d6a821024a96b1e5594de94f986f063fa82e8b1f68d7c50f68}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:17:54.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2100" for this suite. Jan 22 14:18:00.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:18:00.360: INFO: namespace deployment-2100 deletion completed in 6.152781492s • [SLOW TEST:23.341 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:18:00.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-cf5de643-f86a-4241-962c-a95d895dacc1 STEP: Creating configMap with name cm-test-opt-upd-ffedb71a-f29c-48ad-8dd1-1b6d21733762 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-cf5de643-f86a-4241-962c-a95d895dacc1 STEP: Updating configmap cm-test-opt-upd-ffedb71a-f29c-48ad-8dd1-1b6d21733762 STEP: Creating configMap with name cm-test-opt-create-3e991403-fcb4-40f8-82b2-aa66a93153f5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:18:16.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5610" for this suite. 
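The projected-configMap test above creates two ConfigMaps, mounts them through a projected volume marked optional, then deletes one and updates the other, expecting the files in the volume to follow. A minimal sketch of that kind of pod, assuming illustrative names and a busybox image rather than whatever the suite generates:

apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/projected/* 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: cm-opt-del        # may be deleted while the pod runs; optional keeps the pod healthy
          optional: true
      - configMap:
          name: cm-opt-upd        # updated while the pod runs; kubelet re-syncs the file contents
          optional: true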
Jan 22 14:18:38.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:18:39.060: INFO: namespace projected-5610 deletion completed in 22.137201294s • [SLOW TEST:38.700 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:18:39.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 22 14:18:39.208: INFO: Waiting up to 5m0s for pod "pod-2d351a04-a652-490a-bf52-d6014d2a89d6" in namespace "emptydir-3430" to be "success or failure" Jan 22 14:18:39.227: INFO: Pod "pod-2d351a04-a652-490a-bf52-d6014d2a89d6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.751991ms Jan 22 14:18:41.234: INFO: Pod "pod-2d351a04-a652-490a-bf52-d6014d2a89d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026422201s Jan 22 14:18:43.243: INFO: Pod "pod-2d351a04-a652-490a-bf52-d6014d2a89d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035138252s Jan 22 14:18:45.255: INFO: Pod "pod-2d351a04-a652-490a-bf52-d6014d2a89d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047098785s Jan 22 14:18:47.261: INFO: Pod "pod-2d351a04-a652-490a-bf52-d6014d2a89d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052791585s Jan 22 14:18:49.269: INFO: Pod "pod-2d351a04-a652-490a-bf52-d6014d2a89d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061012031s STEP: Saw pod success Jan 22 14:18:49.269: INFO: Pod "pod-2d351a04-a652-490a-bf52-d6014d2a89d6" satisfied condition "success or failure" Jan 22 14:18:49.273: INFO: Trying to get logs from node iruya-node pod pod-2d351a04-a652-490a-bf52-d6014d2a89d6 container test-container: STEP: delete the pod Jan 22 14:18:49.626: INFO: Waiting for pod pod-2d351a04-a652-490a-bf52-d6014d2a89d6 to disappear Jan 22 14:18:49.645: INFO: Pod pod-2d351a04-a652-490a-bf52-d6014d2a89d6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:18:49.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3430" for this suite. 
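The emptydir tests in this run (root/non-root, 0644/0666, default medium) all follow the same shape: a pod writes a file into an emptyDir volume with a given mode and the test asserts the observed permissions and medium from the container logs. A rough equivalent, assuming busybox instead of the suite's mounttest image:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo data > /mnt/test/file && chmod 0644 /mnt/test/file && stat -c '%a' /mnt/test/file"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}        # default medium, i.e. node disk rather than tmpfs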
Jan 22 14:18:55.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:18:55.834: INFO: namespace emptydir-3430 deletion completed in 6.165729252s • [SLOW TEST:16.774 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:18:55.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-dab0f784-8e45-4e07-b4c4-b8e28c5aed01 in namespace container-probe-9376 Jan 22 14:19:06.046: INFO: Started pod liveness-dab0f784-8e45-4e07-b4c4-b8e28c5aed01 in namespace container-probe-9376 STEP: checking the pod's current state and verifying that restartCount is present Jan 22 14:19:06.050: INFO: Initial restart count of pod liveness-dab0f784-8e45-4e07-b4c4-b8e28c5aed01 is 0 Jan 22 14:19:26.141: INFO: Restart count of pod container-probe-9376/liveness-dab0f784-8e45-4e07-b4c4-b8e28c5aed01 is now 1 (20.090639463s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:19:26.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9376" for this suite. 
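The probe test above depends on a container whose /healthz endpoint starts failing after a while, so the kubelet's httpGet liveness probe kills and restarts it, which is why restartCount moves from 0 to 1 about 20 seconds in. A minimal sketch of such a pod; the image and timings here are illustrative, not the suite's exact configuration:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # serves /healthz OK initially, then starts returning errors
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1

Once the endpoint begins failing, the kubelet restarts the container and the pod's restart count increments, exactly as logged above.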
Jan 22 14:19:32.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:19:32.328: INFO: namespace container-probe-9376 deletion completed in 6.120395656s • [SLOW TEST:36.493 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:19:32.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9506 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 22 14:19:32.469: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 22 14:20:12.754: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9506 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 14:20:12.754: INFO: >>> kubeConfig: /root/.kube/config I0122 14:20:12.856872 9 log.go:172] (0xc000ee8790) (0xc002866fa0) Create stream I0122 14:20:12.856960 9 log.go:172] (0xc000ee8790) (0xc002866fa0) Stream added, broadcasting: 1 I0122 14:20:12.869830 9 log.go:172] (0xc000ee8790) Reply frame received for 1 I0122 14:20:12.869909 9 log.go:172] (0xc000ee8790) (0xc002867040) Create stream I0122 14:20:12.869916 9 log.go:172] (0xc000ee8790) (0xc002867040) Stream added, broadcasting: 3 I0122 14:20:12.872581 9 log.go:172] (0xc000ee8790) Reply frame received for 3 I0122 14:20:12.872648 9 log.go:172] (0xc000ee8790) (0xc000f0cb40) Create stream I0122 14:20:12.872660 9 log.go:172] (0xc000ee8790) (0xc000f0cb40) Stream added, broadcasting: 5 I0122 14:20:12.873949 9 log.go:172] (0xc000ee8790) Reply frame received for 5 I0122 14:20:13.000270 9 log.go:172] (0xc000ee8790) Data frame received for 3 I0122 14:20:13.000362 9 log.go:172] (0xc002867040) (3) Data frame handling I0122 14:20:13.000373 9 log.go:172] (0xc002867040) (3) Data frame sent I0122 14:20:13.157589 9 log.go:172] (0xc000ee8790) (0xc002867040) Stream removed, broadcasting: 3 I0122 14:20:13.157721 9 log.go:172] (0xc000ee8790) Data frame received for 1 I0122 14:20:13.157779 9 log.go:172] (0xc002866fa0) (1) Data frame handling I0122 14:20:13.157820 9 log.go:172] (0xc002866fa0) (1) Data frame sent I0122 14:20:13.157853 9 log.go:172] (0xc000ee8790) (0xc000f0cb40) Stream 
removed, broadcasting: 5 I0122 14:20:13.158022 9 log.go:172] (0xc000ee8790) (0xc002866fa0) Stream removed, broadcasting: 1 I0122 14:20:13.158061 9 log.go:172] (0xc000ee8790) Go away received I0122 14:20:13.158616 9 log.go:172] (0xc000ee8790) (0xc002866fa0) Stream removed, broadcasting: 1 I0122 14:20:13.158672 9 log.go:172] (0xc000ee8790) (0xc002867040) Stream removed, broadcasting: 3 I0122 14:20:13.158714 9 log.go:172] (0xc000ee8790) (0xc000f0cb40) Stream removed, broadcasting: 5 Jan 22 14:20:13.158: INFO: Found all expected endpoints: [netserver-0] Jan 22 14:20:13.166: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9506 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 14:20:13.166: INFO: >>> kubeConfig: /root/.kube/config I0122 14:20:13.239871 9 log.go:172] (0xc000994dc0) (0xc000b4c8c0) Create stream I0122 14:20:13.239941 9 log.go:172] (0xc000994dc0) (0xc000b4c8c0) Stream added, broadcasting: 1 I0122 14:20:13.246577 9 log.go:172] (0xc000994dc0) Reply frame received for 1 I0122 14:20:13.246616 9 log.go:172] (0xc000994dc0) (0xc0028670e0) Create stream I0122 14:20:13.246626 9 log.go:172] (0xc000994dc0) (0xc0028670e0) Stream added, broadcasting: 3 I0122 14:20:13.248519 9 log.go:172] (0xc000994dc0) Reply frame received for 3 I0122 14:20:13.248564 9 log.go:172] (0xc000994dc0) (0xc001c70640) Create stream I0122 14:20:13.248585 9 log.go:172] (0xc000994dc0) (0xc001c70640) Stream added, broadcasting: 5 I0122 14:20:13.250417 9 log.go:172] (0xc000994dc0) Reply frame received for 5 I0122 14:20:13.394146 9 log.go:172] (0xc000994dc0) Data frame received for 3 I0122 14:20:13.394277 9 log.go:172] (0xc0028670e0) (3) Data frame handling I0122 14:20:13.394318 9 log.go:172] (0xc0028670e0) (3) Data frame sent I0122 14:20:13.566760 9 log.go:172] (0xc000994dc0) (0xc0028670e0) Stream removed, broadcasting: 3 I0122 14:20:13.566809 9 log.go:172] (0xc000994dc0) Data frame received for 1 I0122 14:20:13.566821 9 log.go:172] (0xc000b4c8c0) (1) Data frame handling I0122 14:20:13.566830 9 log.go:172] (0xc000b4c8c0) (1) Data frame sent I0122 14:20:13.566836 9 log.go:172] (0xc000994dc0) (0xc000b4c8c0) Stream removed, broadcasting: 1 I0122 14:20:13.566891 9 log.go:172] (0xc000994dc0) (0xc001c70640) Stream removed, broadcasting: 5 I0122 14:20:13.566927 9 log.go:172] (0xc000994dc0) Go away received I0122 14:20:13.567027 9 log.go:172] (0xc000994dc0) (0xc000b4c8c0) Stream removed, broadcasting: 1 I0122 14:20:13.567043 9 log.go:172] (0xc000994dc0) (0xc0028670e0) Stream removed, broadcasting: 3 I0122 14:20:13.567052 9 log.go:172] (0xc000994dc0) (0xc001c70640) Stream removed, broadcasting: 5 Jan 22 14:20:13.567: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:20:13.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9506" for this suite. 
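The ExecWithOptions lines above show the mechanics of this networking check: the suite execs into a hostNetwork "host test" pod and curls each netserver pod IP on port 8080 (/hostName), verifying node-to-pod reachability. The vantage-point pod looks roughly like this sketch, with an illustrative image in place of the suite's hostexec image:

apiVersion: v1
kind: Pod
metadata:
  name: host-test-demo
spec:
  hostNetwork: true        # shares the node's network namespace, so requests originate from the node
  containers:
  - name: hostexec
    image: busybox
    command: ["sleep", "3600"]

From such a pod one can probe a pod IP from the log, e.g. kubectl exec host-test-demo -- wget -qO- http://10.44.0.1:8080/hostName, which is the same check the suite performs with curl.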
Jan 22 14:20:37.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:20:37.751: INFO: namespace pod-network-test-9506 deletion completed in 24.175094168s • [SLOW TEST:65.421 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:20:37.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-bdc99392-a711-446b-a353-cbf79dd1d981 STEP: Creating a pod to test consume secrets Jan 22 14:20:37.902: INFO: Waiting up to 5m0s for pod "pod-secrets-36c96556-b8d6-4301-83ed-201a7f1ac9d9" in namespace "secrets-9044" to be "success or failure" Jan 22 14:20:37.934: INFO: Pod "pod-secrets-36c96556-b8d6-4301-83ed-201a7f1ac9d9": Phase="Pending", Reason="", readiness=false. Elapsed: 32.110237ms Jan 22 14:20:39.942: INFO: Pod "pod-secrets-36c96556-b8d6-4301-83ed-201a7f1ac9d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040382409s Jan 22 14:20:41.955: INFO: Pod "pod-secrets-36c96556-b8d6-4301-83ed-201a7f1ac9d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052800053s Jan 22 14:20:43.966: INFO: Pod "pod-secrets-36c96556-b8d6-4301-83ed-201a7f1ac9d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064385862s Jan 22 14:20:45.976: INFO: Pod "pod-secrets-36c96556-b8d6-4301-83ed-201a7f1ac9d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073952665s Jan 22 14:20:47.984: INFO: Pod "pod-secrets-36c96556-b8d6-4301-83ed-201a7f1ac9d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.081885731s STEP: Saw pod success Jan 22 14:20:47.984: INFO: Pod "pod-secrets-36c96556-b8d6-4301-83ed-201a7f1ac9d9" satisfied condition "success or failure" Jan 22 14:20:47.987: INFO: Trying to get logs from node iruya-node pod pod-secrets-36c96556-b8d6-4301-83ed-201a7f1ac9d9 container secret-volume-test: STEP: delete the pod Jan 22 14:20:48.078: INFO: Waiting for pod pod-secrets-36c96556-b8d6-4301-83ed-201a7f1ac9d9 to disappear Jan 22 14:20:48.083: INFO: Pod pod-secrets-36c96556-b8d6-4301-83ed-201a7f1ac9d9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:20:48.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9044" for this suite. 
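The "multiple volumes" secret test mounts the same Secret at two different paths in one pod and verifies both mounts serve the same data. A minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: secret-two-mounts-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-a/* /etc/secret-b/*"]
    volumeMounts:
    - name: secret-a
      mountPath: /etc/secret-a
      readOnly: true
    - name: secret-b
      mountPath: /etc/secret-b
      readOnly: true
  volumes:
  - name: secret-a
    secret:
      secretName: demo-secret
  - name: secret-b
    secret:
      secretName: demo-secret   # same Secret, mounted a second time via a second volume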
Jan 22 14:20:54.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:20:54.249: INFO: namespace secrets-9044 deletion completed in 6.159476092s • [SLOW TEST:16.497 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:20:54.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Jan 22 14:20:54.330: INFO: Waiting up to 5m0s for pod "var-expansion-f88dc0b8-62e7-4555-ae9a-fff52e059044" in namespace "var-expansion-2203" to be "success or failure" Jan 22 14:20:54.351: INFO: Pod "var-expansion-f88dc0b8-62e7-4555-ae9a-fff52e059044": Phase="Pending", Reason="", readiness=false. Elapsed: 20.641635ms Jan 22 14:20:56.360: INFO: Pod "var-expansion-f88dc0b8-62e7-4555-ae9a-fff52e059044": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029966932s Jan 22 14:20:58.404: INFO: Pod "var-expansion-f88dc0b8-62e7-4555-ae9a-fff52e059044": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073774438s Jan 22 14:21:00.415: INFO: Pod "var-expansion-f88dc0b8-62e7-4555-ae9a-fff52e059044": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084475471s Jan 22 14:21:02.421: INFO: Pod "var-expansion-f88dc0b8-62e7-4555-ae9a-fff52e059044": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090582691s STEP: Saw pod success Jan 22 14:21:02.421: INFO: Pod "var-expansion-f88dc0b8-62e7-4555-ae9a-fff52e059044" satisfied condition "success or failure" Jan 22 14:21:02.424: INFO: Trying to get logs from node iruya-node pod var-expansion-f88dc0b8-62e7-4555-ae9a-fff52e059044 container dapi-container: STEP: delete the pod Jan 22 14:21:02.503: INFO: Waiting for pod var-expansion-f88dc0b8-62e7-4555-ae9a-fff52e059044 to disappear Jan 22 14:21:02.514: INFO: Pod var-expansion-f88dc0b8-62e7-4555-ae9a-fff52e059044 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:21:02.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2203" for this suite. 
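The variable-expansion test exercises $(VAR) references in env values: the kubelet expands previously defined variables before the container starts. A minimal sketch, with illustrative variable names:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $FOOBAR"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"   # expanded by the kubelet, so the container sees foo-value;;bar-value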
Jan 22 14:21:08.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:21:08.665: INFO: namespace var-expansion-2203 deletion completed in 6.145640446s • [SLOW TEST:14.416 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:21:08.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 22 14:21:08.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-567ba097-c8a4-4f6a-af71-20f4a7d5e109" in namespace "downward-api-5554" to be "success or failure" Jan 22 14:21:08.794: INFO: Pod "downwardapi-volume-567ba097-c8a4-4f6a-af71-20f4a7d5e109": Phase="Pending", Reason="", readiness=false. Elapsed: 7.172634ms Jan 22 14:21:10.851: INFO: Pod "downwardapi-volume-567ba097-c8a4-4f6a-af71-20f4a7d5e109": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064398356s Jan 22 14:21:12.860: INFO: Pod "downwardapi-volume-567ba097-c8a4-4f6a-af71-20f4a7d5e109": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072737135s Jan 22 14:21:14.868: INFO: Pod "downwardapi-volume-567ba097-c8a4-4f6a-af71-20f4a7d5e109": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080758891s Jan 22 14:21:16.893: INFO: Pod "downwardapi-volume-567ba097-c8a4-4f6a-af71-20f4a7d5e109": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10653336s Jan 22 14:21:18.903: INFO: Pod "downwardapi-volume-567ba097-c8a4-4f6a-af71-20f4a7d5e109": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116554171s STEP: Saw pod success Jan 22 14:21:18.903: INFO: Pod "downwardapi-volume-567ba097-c8a4-4f6a-af71-20f4a7d5e109" satisfied condition "success or failure" Jan 22 14:21:18.909: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-567ba097-c8a4-4f6a-af71-20f4a7d5e109 container client-container: STEP: delete the pod Jan 22 14:21:19.728: INFO: Waiting for pod downwardapi-volume-567ba097-c8a4-4f6a-af71-20f4a7d5e109 to disappear Jan 22 14:21:19.741: INFO: Pod downwardapi-volume-567ba097-c8a4-4f6a-af71-20f4a7d5e109 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:21:19.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5554" for this suite. 
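The downward API volume test above exposes the container's own CPU limit as a file via resourceFieldRef and checks the file's contents from the container logs. A minimal sketch, assuming illustrative names and a busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m        # file contains "500", the limit expressed in millicores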
Jan 22 14:21:25.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:21:26.032: INFO: namespace downward-api-5554 deletion completed in 6.280931515s • [SLOW TEST:17.366 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:21:26.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-4188 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4188 to expose endpoints map[] Jan 22 14:21:26.174: INFO: Get endpoints failed (8.88355ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jan 22 14:21:27.181: INFO: successfully validated that service endpoint-test2 in namespace services-4188 exposes endpoints map[] (1.016101528s elapsed) STEP: Creating pod pod1 in namespace services-4188 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4188 to expose endpoints map[pod1:[80]] Jan 22 14:21:31.312: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.119466647s elapsed, will retry) Jan 22 14:21:35.359: INFO: successfully validated that service endpoint-test2 in namespace services-4188 exposes endpoints map[pod1:[80]] (8.16570594s elapsed) STEP: Creating pod pod2 in namespace services-4188 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4188 to expose endpoints map[pod1:[80] pod2:[80]] Jan 22 14:21:39.764: INFO: Unexpected endpoints: found map[336caad7-833e-48e3-9101-1b5f45233d14:[80]], expected map[pod1:[80] pod2:[80]] (4.397483565s elapsed, will retry) Jan 22 14:21:42.907: INFO: successfully validated that service endpoint-test2 in namespace services-4188 exposes endpoints map[pod1:[80] pod2:[80]] (7.540936699s elapsed) STEP: Deleting pod pod1 in namespace services-4188 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4188 to expose endpoints map[pod2:[80]] Jan 22 14:21:43.002: INFO: successfully validated that service endpoint-test2 in namespace services-4188 exposes endpoints map[pod2:[80]] (78.233082ms elapsed) STEP: Deleting pod pod2 in namespace services-4188 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4188 to expose endpoints map[] Jan 22 14:21:43.041: INFO: successfully validated that service endpoint-test2 in namespace services-4188 exposes endpoints map[] (5.700267ms elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:21:43.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4188" for this suite. Jan 22 14:22:07.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:22:07.311: INFO: namespace services-4188 deletion completed in 24.161171893s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:41.279 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:22:07.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Jan 22 14:22:07.438: INFO: Waiting up to 5m0s for pod "pod-4eaf0de5-1289-4449-9dbb-aadab3370995" in namespace "emptydir-8659" to be "success or failure" Jan 22 14:22:07.456: INFO: Pod "pod-4eaf0de5-1289-4449-9dbb-aadab3370995": Phase="Pending", Reason="", readiness=false. Elapsed: 17.08607ms Jan 22 14:22:09.469: INFO: Pod "pod-4eaf0de5-1289-4449-9dbb-aadab3370995": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029931804s Jan 22 14:22:11.478: INFO: Pod "pod-4eaf0de5-1289-4449-9dbb-aadab3370995": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039044243s Jan 22 14:22:13.484: INFO: Pod "pod-4eaf0de5-1289-4449-9dbb-aadab3370995": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045779142s Jan 22 14:22:15.491: INFO: Pod "pod-4eaf0de5-1289-4449-9dbb-aadab3370995": Phase="Running", Reason="", readiness=true. Elapsed: 8.052631039s Jan 22 14:22:17.500: INFO: Pod "pod-4eaf0de5-1289-4449-9dbb-aadab3370995": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.061186626s STEP: Saw pod success Jan 22 14:22:17.500: INFO: Pod "pod-4eaf0de5-1289-4449-9dbb-aadab3370995" satisfied condition "success or failure" Jan 22 14:22:17.505: INFO: Trying to get logs from node iruya-node pod pod-4eaf0de5-1289-4449-9dbb-aadab3370995 container test-container: STEP: delete the pod Jan 22 14:22:17.706: INFO: Waiting for pod pod-4eaf0de5-1289-4449-9dbb-aadab3370995 to disappear Jan 22 14:22:17.736: INFO: Pod pod-4eaf0de5-1289-4449-9dbb-aadab3370995 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:22:17.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8659" for this suite. Jan 22 14:22:23.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:22:23.944: INFO: namespace emptydir-8659 deletion completed in 6.186980346s • [SLOW TEST:16.632 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:22:23.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-068c965a-e424-42cf-bac1-ea9ab6ea2cc0 STEP: Creating a pod to test consume secrets Jan 22 14:22:24.325: INFO: Waiting up to 5m0s for pod "pod-secrets-5f3efb39-a6f2-43e3-b0ac-d8d08de13ef1" in namespace "secrets-3281" to be "success or failure" Jan 22 14:22:24.331: INFO: Pod "pod-secrets-5f3efb39-a6f2-43e3-b0ac-d8d08de13ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123396ms Jan 22 14:22:26.337: INFO: Pod "pod-secrets-5f3efb39-a6f2-43e3-b0ac-d8d08de13ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01193062s Jan 22 14:22:28.834: INFO: Pod "pod-secrets-5f3efb39-a6f2-43e3-b0ac-d8d08de13ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.508956254s Jan 22 14:22:30.904: INFO: Pod "pod-secrets-5f3efb39-a6f2-43e3-b0ac-d8d08de13ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578544421s Jan 22 14:22:32.919: INFO: Pod "pod-secrets-5f3efb39-a6f2-43e3-b0ac-d8d08de13ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.593664551s Jan 22 14:22:34.928: INFO: Pod "pod-secrets-5f3efb39-a6f2-43e3-b0ac-d8d08de13ef1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.602701461s STEP: Saw pod success Jan 22 14:22:34.928: INFO: Pod "pod-secrets-5f3efb39-a6f2-43e3-b0ac-d8d08de13ef1" satisfied condition "success or failure" Jan 22 14:22:34.932: INFO: Trying to get logs from node iruya-node pod pod-secrets-5f3efb39-a6f2-43e3-b0ac-d8d08de13ef1 container secret-volume-test: STEP: delete the pod Jan 22 14:22:35.072: INFO: Waiting for pod pod-secrets-5f3efb39-a6f2-43e3-b0ac-d8d08de13ef1 to disappear Jan 22 14:22:35.080: INFO: Pod pod-secrets-5f3efb39-a6f2-43e3-b0ac-d8d08de13ef1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:22:35.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3281" for this suite. Jan 22 14:22:41.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:22:41.238: INFO: namespace secrets-3281 deletion completed in 6.150565757s STEP: Destroying namespace "secret-namespace-9480" for this suite. Jan 22 14:22:47.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:22:47.417: INFO: namespace secret-namespace-9480 deletion completed in 6.178517838s • [SLOW TEST:23.473 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:22:47.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jan 22 14:22:47.530: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 22 14:22:47.541: INFO: Waiting for terminating namespaces to be deleted... 
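Regarding the cross-namespace Secret test that finished above: it verifies that Secret volume resolution is namespace-scoped, so two Secrets may share a name in different namespaces and a pod always mounts the one from its own namespace. A sketch of that scenario with illustrative namespaces and values:

apiVersion: v1
kind: Secret
metadata:
  name: shared-name
  namespace: ns-a
stringData:
  data: value-in-ns-a
---
apiVersion: v1
kind: Secret
metadata:
  name: shared-name
  namespace: ns-b
stringData:
  data: value-in-ns-b
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-ns-demo
  namespace: ns-a              # resolves shared-name in ns-a, never ns-b
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret/data"]   # prints value-in-ns-a
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret
  volumes:
  - name: secret-vol
    secret:
      secretName: shared-name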
Jan 22 14:22:47.546: INFO: Logging pods the kubelet thinks is on node iruya-node before test Jan 22 14:22:47.562: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Jan 22 14:22:47.562: INFO: Container weave ready: true, restart count 0 Jan 22 14:22:47.562: INFO: Container weave-npc ready: true, restart count 0 Jan 22 14:22:47.562: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Jan 22 14:22:47.562: INFO: Container kube-proxy ready: true, restart count 0 Jan 22 14:22:47.562: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Jan 22 14:22:47.577: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Jan 22 14:22:47.577: INFO: Container etcd ready: true, restart count 0 Jan 22 14:22:47.577: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Jan 22 14:22:47.577: INFO: Container weave ready: true, restart count 0 Jan 22 14:22:47.577: INFO: Container weave-npc ready: true, restart count 0 Jan 22 14:22:47.577: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 22 14:22:47.577: INFO: Container coredns ready: true, restart count 0 Jan 22 14:22:47.577: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Jan 22 14:22:47.577: INFO: Container kube-controller-manager ready: true, restart count 19 Jan 22 14:22:47.577: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Jan 22 14:22:47.577: INFO: Container kube-proxy ready: true, restart count 0 Jan 22 14:22:47.577: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Jan 22 14:22:47.577: INFO: Container kube-apiserver ready: true, restart count 0 Jan 22 14:22:47.577: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Jan 22 14:22:47.577: INFO: Container kube-scheduler ready: true, restart count 13 Jan 22 14:22:47.577: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 22 14:22:47.577: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15ec3b7abb65db27], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:22:48.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5623" for this suite. 
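The FailedScheduling event above ("0/2 nodes are available: 2 node(s) didn't match node selector.") is exactly what this predicate test expects when a pod requests a label no node carries. An illustrative pod that reproduces it (label key/value and image are assumptions, not the suite's generated values):

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo
spec:
  nodeSelector:
    demo-label: nonexistent-value   # no node in the cluster has this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1

Such a pod stays Pending and the scheduler keeps emitting FailedScheduling events until a matching node appears or the pod is deleted.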
Jan 22 14:22:56.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:22:56.803: INFO: namespace sched-pred-5623 deletion completed in 8.153170246s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:9.385 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:22:56.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-10066ba1-2fde-4209-9e23-e8c94509fc11 STEP: Creating a pod to test consume configMaps Jan 22 14:22:56.956: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-db58bce4-5711-437a-bde3-f1ddab299272" in namespace "projected-6353" to be "success or failure" Jan 22 14:22:56.978: INFO: Pod "pod-projected-configmaps-db58bce4-5711-437a-bde3-f1ddab299272": Phase="Pending", Reason="", readiness=false. Elapsed: 22.655209ms Jan 22 14:22:58.986: INFO: Pod "pod-projected-configmaps-db58bce4-5711-437a-bde3-f1ddab299272": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029973758s Jan 22 14:23:00.999: INFO: Pod "pod-projected-configmaps-db58bce4-5711-437a-bde3-f1ddab299272": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043051776s Jan 22 14:23:03.006: INFO: Pod "pod-projected-configmaps-db58bce4-5711-437a-bde3-f1ddab299272": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049954764s Jan 22 14:23:05.014: INFO: Pod "pod-projected-configmaps-db58bce4-5711-437a-bde3-f1ddab299272": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057868044s Jan 22 14:23:07.021: INFO: Pod "pod-projected-configmaps-db58bce4-5711-437a-bde3-f1ddab299272": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.065689782s STEP: Saw pod success Jan 22 14:23:07.021: INFO: Pod "pod-projected-configmaps-db58bce4-5711-437a-bde3-f1ddab299272" satisfied condition "success or failure" Jan 22 14:23:07.026: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-db58bce4-5711-437a-bde3-f1ddab299272 container projected-configmap-volume-test: STEP: delete the pod Jan 22 14:23:07.113: INFO: Waiting for pod pod-projected-configmaps-db58bce4-5711-437a-bde3-f1ddab299272 to disappear Jan 22 14:23:07.132: INFO: Pod pod-projected-configmaps-db58bce4-5711-437a-bde3-f1ddab299272 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:23:07.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6353" for this suite. Jan 22 14:23:13.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:23:13.417: INFO: namespace projected-6353 deletion completed in 6.222432488s • [SLOW TEST:16.614 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:23:13.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-dc7fa25d-c705-462f-8706-7da09d8aece0 STEP: Creating a pod to test consume configMaps Jan 22 14:23:13.535: INFO: Waiting up to 5m0s for pod "pod-configmaps-1639dfc6-9aa3-4683-89f8-3e316a5c2685" in namespace "configmap-3277" to be "success or failure" Jan 22 14:23:13.542: INFO: Pod "pod-configmaps-1639dfc6-9aa3-4683-89f8-3e316a5c2685": Phase="Pending", Reason="", readiness=false. Elapsed: 6.49833ms Jan 22 14:23:15.549: INFO: Pod "pod-configmaps-1639dfc6-9aa3-4683-89f8-3e316a5c2685": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01334504s Jan 22 14:23:17.558: INFO: Pod "pod-configmaps-1639dfc6-9aa3-4683-89f8-3e316a5c2685": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022751361s Jan 22 14:23:19.576: INFO: Pod "pod-configmaps-1639dfc6-9aa3-4683-89f8-3e316a5c2685": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040253735s Jan 22 14:23:21.615: INFO: Pod "pod-configmaps-1639dfc6-9aa3-4683-89f8-3e316a5c2685": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079349331s Jan 22 14:23:23.625: INFO: Pod "pod-configmaps-1639dfc6-9aa3-4683-89f8-3e316a5c2685": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.089872085s STEP: Saw pod success Jan 22 14:23:23.625: INFO: Pod "pod-configmaps-1639dfc6-9aa3-4683-89f8-3e316a5c2685" satisfied condition "success or failure" Jan 22 14:23:23.630: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1639dfc6-9aa3-4683-89f8-3e316a5c2685 container configmap-volume-test: STEP: delete the pod Jan 22 14:23:23.696: INFO: Waiting for pod pod-configmaps-1639dfc6-9aa3-4683-89f8-3e316a5c2685 to disappear Jan 22 14:23:23.709: INFO: Pod pod-configmaps-1639dfc6-9aa3-4683-89f8-3e316a5c2685 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:23:23.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3277" for this suite. Jan 22 14:23:29.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:23:29.912: INFO: namespace configmap-3277 deletion completed in 6.195737435s • [SLOW TEST:16.493 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:23:29.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 22 14:23:30.051: INFO: Waiting up to 5m0s for pod "pod-774e82f2-c82b-4ef6-92ad-90a9d9bbe218" in namespace "emptydir-1243" to be "success or failure" Jan 22 14:23:30.064: INFO: Pod "pod-774e82f2-c82b-4ef6-92ad-90a9d9bbe218": Phase="Pending", Reason="", readiness=false. Elapsed: 12.482618ms Jan 22 14:23:32.073: INFO: Pod "pod-774e82f2-c82b-4ef6-92ad-90a9d9bbe218": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021492032s Jan 22 14:23:34.085: INFO: Pod "pod-774e82f2-c82b-4ef6-92ad-90a9d9bbe218": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033208506s Jan 22 14:23:36.093: INFO: Pod "pod-774e82f2-c82b-4ef6-92ad-90a9d9bbe218": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042106713s Jan 22 14:23:38.101: INFO: Pod "pod-774e82f2-c82b-4ef6-92ad-90a9d9bbe218": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049651289s Jan 22 14:23:40.115: INFO: Pod "pod-774e82f2-c82b-4ef6-92ad-90a9d9bbe218": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.063986687s STEP: Saw pod success Jan 22 14:23:40.115: INFO: Pod "pod-774e82f2-c82b-4ef6-92ad-90a9d9bbe218" satisfied condition "success or failure" Jan 22 14:23:40.120: INFO: Trying to get logs from node iruya-node pod pod-774e82f2-c82b-4ef6-92ad-90a9d9bbe218 container test-container: STEP: delete the pod Jan 22 14:23:40.481: INFO: Waiting for pod pod-774e82f2-c82b-4ef6-92ad-90a9d9bbe218 to disappear Jan 22 14:23:40.491: INFO: Pod pod-774e82f2-c82b-4ef6-92ad-90a9d9bbe218 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:23:40.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1243" for this suite. Jan 22 14:23:46.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:23:46.664: INFO: namespace emptydir-1243 deletion completed in 6.156916763s • [SLOW TEST:16.752 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:23:46.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 22 14:23:46.799: INFO: Waiting up to 5m0s for pod "pod-2e2002e3-1d70-4cc4-9aa4-df14f2a32627" in namespace "emptydir-1241" to be "success or failure" Jan 22 14:23:46.805: INFO: Pod "pod-2e2002e3-1d70-4cc4-9aa4-df14f2a32627": Phase="Pending", Reason="", readiness=false. Elapsed: 5.941279ms Jan 22 14:23:48.817: INFO: Pod "pod-2e2002e3-1d70-4cc4-9aa4-df14f2a32627": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018399472s Jan 22 14:23:50.884: INFO: Pod "pod-2e2002e3-1d70-4cc4-9aa4-df14f2a32627": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084845411s Jan 22 14:23:52.900: INFO: Pod "pod-2e2002e3-1d70-4cc4-9aa4-df14f2a32627": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100617193s Jan 22 14:23:54.908: INFO: Pod "pod-2e2002e3-1d70-4cc4-9aa4-df14f2a32627": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.108500338s STEP: Saw pod success Jan 22 14:23:54.908: INFO: Pod "pod-2e2002e3-1d70-4cc4-9aa4-df14f2a32627" satisfied condition "success or failure" Jan 22 14:23:54.914: INFO: Trying to get logs from node iruya-node pod pod-2e2002e3-1d70-4cc4-9aa4-df14f2a32627 container test-container: STEP: delete the pod Jan 22 14:23:54.977: INFO: Waiting for pod pod-2e2002e3-1d70-4cc4-9aa4-df14f2a32627 to disappear Jan 22 14:23:54.980: INFO: Pod pod-2e2002e3-1d70-4cc4-9aa4-df14f2a32627 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:23:54.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1241" for this suite. Jan 22 14:24:01.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:24:01.159: INFO: namespace emptydir-1241 deletion completed in 6.174786535s • [SLOW TEST:14.494 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:24:01.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 22 14:24:01.236: INFO: Waiting up to 5m0s for pod "pod-ff30a80a-3b78-425a-9856-9be795df950c" in namespace "emptydir-3960" to be "success or failure" Jan 22 14:24:01.242: INFO: Pod "pod-ff30a80a-3b78-425a-9856-9be795df950c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142648ms Jan 22 14:24:03.249: INFO: Pod "pod-ff30a80a-3b78-425a-9856-9be795df950c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013363261s Jan 22 14:24:05.255: INFO: Pod "pod-ff30a80a-3b78-425a-9856-9be795df950c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019541275s Jan 22 14:24:07.264: INFO: Pod "pod-ff30a80a-3b78-425a-9856-9be795df950c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027972801s Jan 22 14:24:09.276: INFO: Pod "pod-ff30a80a-3b78-425a-9856-9be795df950c": Phase="Succeeded", Reason="", readiness=false. 
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 14:24:01.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 22 14:24:01.236: INFO: Waiting up to 5m0s for pod "pod-ff30a80a-3b78-425a-9856-9be795df950c" in namespace "emptydir-3960" to be "success or failure"
Jan 22 14:24:01.242: INFO: Pod "pod-ff30a80a-3b78-425a-9856-9be795df950c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142648ms
Jan 22 14:24:03.249: INFO: Pod "pod-ff30a80a-3b78-425a-9856-9be795df950c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013363261s
Jan 22 14:24:05.255: INFO: Pod "pod-ff30a80a-3b78-425a-9856-9be795df950c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019541275s
Jan 22 14:24:07.264: INFO: Pod "pod-ff30a80a-3b78-425a-9856-9be795df950c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027972801s
Jan 22 14:24:09.276: INFO: Pod "pod-ff30a80a-3b78-425a-9856-9be795df950c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040681101s
STEP: Saw pod success
Jan 22 14:24:09.277: INFO: Pod "pod-ff30a80a-3b78-425a-9856-9be795df950c" satisfied condition "success or failure"
Jan 22 14:24:09.282: INFO: Trying to get logs from node iruya-node pod pod-ff30a80a-3b78-425a-9856-9be795df950c container test-container: 
STEP: delete the pod
Jan 22 14:24:09.347: INFO: Waiting for pod pod-ff30a80a-3b78-425a-9856-9be795df950c to disappear
Jan 22 14:24:09.371: INFO: Pod pod-ff30a80a-3b78-425a-9856-9be795df950c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 14:24:09.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3960" for this suite.
Jan 22 14:24:15.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 14:24:15.545: INFO: namespace emptydir-3960 deletion completed in 6.158338761s
• [SLOW TEST:14.385 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 14:24:15.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 22 14:24:15.612: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8828b6c-ba60-4dff-84e1-182d182e9fb1" in namespace "projected-4147" to be "success or failure"
Jan 22 14:24:15.616: INFO: Pod "downwardapi-volume-a8828b6c-ba60-4dff-84e1-182d182e9fb1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.765439ms
Jan 22 14:24:17.623: INFO: Pod "downwardapi-volume-a8828b6c-ba60-4dff-84e1-182d182e9fb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011531643s
Jan 22 14:24:19.632: INFO: Pod "downwardapi-volume-a8828b6c-ba60-4dff-84e1-182d182e9fb1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020131643s
Jan 22 14:24:21.643: INFO: Pod "downwardapi-volume-a8828b6c-ba60-4dff-84e1-182d182e9fb1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031273069s
Jan 22 14:24:23.652: INFO: Pod "downwardapi-volume-a8828b6c-ba60-4dff-84e1-182d182e9fb1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040417243s
Jan 22 14:24:25.660: INFO: Pod "downwardapi-volume-a8828b6c-ba60-4dff-84e1-182d182e9fb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.048501417s
STEP: Saw pod success
Jan 22 14:24:25.661: INFO: Pod "downwardapi-volume-a8828b6c-ba60-4dff-84e1-182d182e9fb1" satisfied condition "success or failure"
Jan 22 14:24:25.664: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a8828b6c-ba60-4dff-84e1-182d182e9fb1 container client-container: 
STEP: delete the pod
Jan 22 14:24:25.750: INFO: Waiting for pod downwardapi-volume-a8828b6c-ba60-4dff-84e1-182d182e9fb1 to disappear
Jan 22 14:24:25.834: INFO: Pod downwardapi-volume-a8828b6c-ba60-4dff-84e1-182d182e9fb1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 14:24:25.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4147" for this suite.
Jan 22 14:24:31.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 14:24:31.975: INFO: namespace projected-4147 deletion completed in 6.131639549s
• [SLOW TEST:16.430 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
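
What this test checks: a downward API file wired to limits.cpu for a container that sets no cpu limit resolves to the node's allocatable CPU instead of failing. Roughly, the projected volume it mounts looks like the following (a hypothetical sketch with the same assumed imports as the earlier one, not the framework's own code):

vol := corev1.Volume{
	Name: "podinfo",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				DownwardAPI: &corev1.DownwardAPIProjection{
					Items: []corev1.DownwardAPIVolumeFile{{
						// With no cpu limit set on "client-container",
						// reading this file yields the node's allocatable CPU.
						Path: "cpu_limit",
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "client-container",
							Resource:      "limits.cpu",
						},
					}},
				},
			}},
		},
	},
}
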
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 14:24:31.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 22 14:24:32.079: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7426,SelfLink:/api/v1/namespaces/watch-7426/configmaps/e2e-watch-test-configmap-a,UID:111f661a-3d5a-4496-8229-bc7aa45308cf,ResourceVersion:21444070,Generation:0,CreationTimestamp:2020-01-22 14:24:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 22 14:24:32.079: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7426,SelfLink:/api/v1/namespaces/watch-7426/configmaps/e2e-watch-test-configmap-a,UID:111f661a-3d5a-4496-8229-bc7aa45308cf,ResourceVersion:21444070,Generation:0,CreationTimestamp:2020-01-22 14:24:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 22 14:24:42.106: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7426,SelfLink:/api/v1/namespaces/watch-7426/configmaps/e2e-watch-test-configmap-a,UID:111f661a-3d5a-4496-8229-bc7aa45308cf,ResourceVersion:21444084,Generation:0,CreationTimestamp:2020-01-22 14:24:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 22 14:24:42.107: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7426,SelfLink:/api/v1/namespaces/watch-7426/configmaps/e2e-watch-test-configmap-a,UID:111f661a-3d5a-4496-8229-bc7aa45308cf,ResourceVersion:21444084,Generation:0,CreationTimestamp:2020-01-22 14:24:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 22 14:24:52.128: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7426,SelfLink:/api/v1/namespaces/watch-7426/configmaps/e2e-watch-test-configmap-a,UID:111f661a-3d5a-4496-8229-bc7aa45308cf,ResourceVersion:21444099,Generation:0,CreationTimestamp:2020-01-22 14:24:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 22 14:24:52.129: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7426,SelfLink:/api/v1/namespaces/watch-7426/configmaps/e2e-watch-test-configmap-a,UID:111f661a-3d5a-4496-8229-bc7aa45308cf,ResourceVersion:21444099,Generation:0,CreationTimestamp:2020-01-22 14:24:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 22 14:25:02.142: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7426,SelfLink:/api/v1/namespaces/watch-7426/configmaps/e2e-watch-test-configmap-a,UID:111f661a-3d5a-4496-8229-bc7aa45308cf,ResourceVersion:21444114,Generation:0,CreationTimestamp:2020-01-22 14:24:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 22 14:25:02.142: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7426,SelfLink:/api/v1/namespaces/watch-7426/configmaps/e2e-watch-test-configmap-a,UID:111f661a-3d5a-4496-8229-bc7aa45308cf,ResourceVersion:21444114,Generation:0,CreationTimestamp:2020-01-22 14:24:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 22 14:25:12.154: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7426,SelfLink:/api/v1/namespaces/watch-7426/configmaps/e2e-watch-test-configmap-b,UID:b9cd5b4e-591d-4d37-8048-2d1542772b29,ResourceVersion:21444128,Generation:0,CreationTimestamp:2020-01-22 14:25:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 22 14:25:12.155: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7426,SelfLink:/api/v1/namespaces/watch-7426/configmaps/e2e-watch-test-configmap-b,UID:b9cd5b4e-591d-4d37-8048-2d1542772b29,ResourceVersion:21444128,Generation:0,CreationTimestamp:2020-01-22 14:25:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 22 14:25:22.998: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7426,SelfLink:/api/v1/namespaces/watch-7426/configmaps/e2e-watch-test-configmap-b,UID:b9cd5b4e-591d-4d37-8048-2d1542772b29,ResourceVersion:21444143,Generation:0,CreationTimestamp:2020-01-22 14:25:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 22 14:25:22.998: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7426,SelfLink:/api/v1/namespaces/watch-7426/configmaps/e2e-watch-test-configmap-b,UID:b9cd5b4e-591d-4d37-8048-2d1542772b29,ResourceVersion:21444143,Generation:0,CreationTimestamp:2020-01-22 14:25:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 14:25:32.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7426" for this suite.
Jan 22 14:25:39.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 14:25:39.115: INFO: namespace watch-7426 deletion completed in 6.111068747s
• [SLOW TEST:67.139 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
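
Each notification above appears twice because two of the three watchers match it: the single-label watch (A or B, as appropriate) and the combined A-or-B watch. A minimal client-go sketch of one such label-filtered watch (hypothetical names, and the exact selector string is an assumption; v1.15-era Watch takes no context argument):

// Assumed additional import: "fmt"
w, err := clientset.CoreV1().ConfigMaps("watch-7426").Watch(metav1.ListOptions{
	LabelSelector: "watch-this-configmap in (multiple-watchers-A, multiple-watchers-B)",
})
if err != nil {
	panic(err)
}
defer w.Stop()
for ev := range w.ResultChan() {
	cm, ok := ev.Object.(*corev1.ConfigMap)
	if !ok {
		continue // e.g. a watch error event
	}
	// Mirrors the "Got : ADDED/MODIFIED/DELETED" lines in the log above.
	fmt.Printf("Got : %s %s rv=%s data=%v\n", ev.Type, cm.Name, cm.ResourceVersion, cm.Data)
}
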
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 14:25:39.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jan 22 14:25:39.840: INFO: created pod pod-service-account-defaultsa
Jan 22 14:25:39.840: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 22 14:25:39.862: INFO: created pod pod-service-account-mountsa
Jan 22 14:25:39.862: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 22 14:25:39.878: INFO: created pod pod-service-account-nomountsa
Jan 22 14:25:39.878: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 22 14:25:40.010: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 22 14:25:40.010: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 22 14:25:40.034: INFO: created pod pod-service-account-mountsa-mountspec
Jan 22 14:25:40.034: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 22 14:25:40.128: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 22 14:25:40.128: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 22 14:25:40.328: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 22 14:25:40.329: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 22 14:25:40.372: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 22 14:25:40.372: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 22 14:25:40.600: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 22 14:25:40.600: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 14:25:40.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-811" for this suite.
Jan 22 14:26:09.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 14:26:09.230: INFO: namespace svcaccounts-811 deletion completed in 27.566255777s
• [SLOW TEST:30.115 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
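
The nine pods above enumerate a 3x3 matrix: the service account's AutomountServiceAccountToken (defaultsa = unset, mountsa = true, nomountsa = false) crossed with the pod spec's own field (unset, mountspec = true, nomountspec = false). The mount results confirm the precedence rule: the pod-level field, when set, wins; otherwise the service account's setting applies; otherwise the token is mounted. For example, pod-service-account-nomountsa-mountspec mounts the token (pod true overrides SA false), while pod-service-account-mountsa-nomountspec does not. A hypothetical sketch of the two knobs (same assumed imports as above):

mount := true
noMount := false
sa := &corev1.ServiceAccount{
	ObjectMeta:                   metav1.ObjectMeta{Name: "nomountsa"},
	AutomountServiceAccountToken: &noMount, // SA-level default: do not mount
}
pod := &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountsa-mountspec"},
	Spec: corev1.PodSpec{
		ServiceAccountName:           sa.Name,
		AutomountServiceAccountToken: &mount, // pod-level field takes precedence
		Containers: []corev1.Container{{
			Name:    "token-test",
			Image:   "busybox",
			Command: []string{"sleep", "3600"},
		}},
	},
}
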
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 14:26:09.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 22 14:26:09.377: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2188,SelfLink:/api/v1/namespaces/watch-2188/configmaps/e2e-watch-test-watch-closed,UID:1405aae4-553b-4d5a-ade3-1b877b481407,ResourceVersion:21444311,Generation:0,CreationTimestamp:2020-01-22 14:26:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 22 14:26:09.377: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2188,SelfLink:/api/v1/namespaces/watch-2188/configmaps/e2e-watch-test-watch-closed,UID:1405aae4-553b-4d5a-ade3-1b877b481407,ResourceVersion:21444312,Generation:0,CreationTimestamp:2020-01-22 14:26:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 22 14:26:09.404: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2188,SelfLink:/api/v1/namespaces/watch-2188/configmaps/e2e-watch-test-watch-closed,UID:1405aae4-553b-4d5a-ade3-1b877b481407,ResourceVersion:21444313,Generation:0,CreationTimestamp:2020-01-22 14:26:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 22 14:26:09.405: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2188,SelfLink:/api/v1/namespaces/watch-2188/configmaps/e2e-watch-test-watch-closed,UID:1405aae4-553b-4d5a-ade3-1b877b481407,ResourceVersion:21444314,Generation:0,CreationTimestamp:2020-01-22 14:26:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 14:26:09.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2188" for this suite.
Jan 22 14:26:15.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 14:26:15.651: INFO: namespace watch-2188 deletion completed in 6.137226565s
• [SLOW TEST:6.420 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
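
The resume trick in the test above: the first watch observes the ADDED (resourceVersion 21444311) and first MODIFIED (21444312) events and is then closed; the second watch is opened with ListOptions.ResourceVersion set to the last version observed, so the API server replays everything after it, namely the second MODIFIED (21444313) and the DELETED (21444314). A hypothetical client-go sketch, with the same assumptions as the earlier watch example:

w, err := clientset.CoreV1().ConfigMaps("watch-2188").Watch(metav1.ListOptions{
	LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
	ResourceVersion: "21444312", // last resourceVersion seen by the closed watch
})
if err != nil {
	panic(err)
}
// The first events delivered here are the MODIFIED (21444313) and
// DELETED (21444314) notifications that occurred while no watch was open.
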
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 14:26:15.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-p2kl
STEP: Creating a pod to test atomic-volume-subpath
Jan 22 14:26:15.777: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-p2kl" in namespace "subpath-4787" to be "success or failure"
Jan 22 14:26:15.780: INFO: Pod "pod-subpath-test-secret-p2kl": Phase="Pending", Reason="", readiness=false. Elapsed: 3.799359ms
Jan 22 14:26:17.794: INFO: Pod "pod-subpath-test-secret-p2kl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017712222s
Jan 22 14:26:19.802: INFO: Pod "pod-subpath-test-secret-p2kl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025403092s
Jan 22 14:26:21.810: INFO: Pod "pod-subpath-test-secret-p2kl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033041312s
Jan 22 14:26:23.819: INFO: Pod "pod-subpath-test-secret-p2kl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042794435s
Jan 22 14:26:25.828: INFO: Pod "pod-subpath-test-secret-p2kl": Phase="Running", Reason="", readiness=true. Elapsed: 10.050998373s
Jan 22 14:26:27.835: INFO: Pod "pod-subpath-test-secret-p2kl": Phase="Running", Reason="", readiness=true. Elapsed: 12.058864148s
Jan 22 14:26:29.847: INFO: Pod "pod-subpath-test-secret-p2kl": Phase="Running", Reason="", readiness=true. Elapsed: 14.070191975s
Jan 22 14:26:31.858: INFO: Pod "pod-subpath-test-secret-p2kl": Phase="Running", Reason="", readiness=true. Elapsed: 16.081035204s
Jan 22 14:26:33.876: INFO: Pod "pod-subpath-test-secret-p2kl": Phase="Running", Reason="", readiness=true. Elapsed: 18.098892161s
Jan 22 14:26:35.890: INFO: Pod "pod-subpath-test-secret-p2kl": Phase="Running", Reason="", readiness=true. Elapsed: 20.11381108s
Jan 22 14:26:37.900: INFO: Pod "pod-subpath-test-secret-p2kl": Phase="Running", Reason="", readiness=true. Elapsed: 22.123466014s
Jan 22 14:26:39.911: INFO: Pod "pod-subpath-test-secret-p2kl": Phase="Running", Reason="", readiness=true. Elapsed: 24.133934273s
Jan 22 14:26:41.919: INFO: Pod "pod-subpath-test-secret-p2kl": Phase="Running", Reason="", readiness=true. Elapsed: 26.142190086s
Jan 22 14:26:43.934: INFO: Pod "pod-subpath-test-secret-p2kl": Phase="Running", Reason="", readiness=true. Elapsed: 28.157188085s
Jan 22 14:26:45.944: INFO: Pod "pod-subpath-test-secret-p2kl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.1676425s
STEP: Saw pod success
Jan 22 14:26:45.944: INFO: Pod "pod-subpath-test-secret-p2kl" satisfied condition "success or failure"
Jan 22 14:26:45.948: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-p2kl container test-container-subpath-secret-p2kl: 
STEP: delete the pod
Jan 22 14:26:46.051: INFO: Waiting for pod pod-subpath-test-secret-p2kl to disappear
Jan 22 14:26:46.055: INFO: Pod pod-subpath-test-secret-p2kl no longer exists
STEP: Deleting pod pod-subpath-test-secret-p2kl
Jan 22 14:26:46.055: INFO: Deleting pod "pod-subpath-test-secret-p2kl" in namespace "subpath-4787"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 14:26:46.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4787" for this suite.
Jan 22 14:26:52.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 14:26:52.186: INFO: namespace subpath-4787 deletion completed in 6.122911867s
• [SLOW TEST:36.535 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
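
Background for the "atomic writer" grouping: secret, configmap, downward API, and projected volumes are published by the kubelet through a timestamped directory plus a symlink swap, and a subPath mount pins a single entry of such a volume, which is exactly the interaction this test exercises. Roughly what the pod above mounts, as a hypothetical sketch (the secret name, key, and command are assumptions; the log does not show them):

container := corev1.Container{
	Name:  "test-container-subpath-secret-p2kl",
	Image: "busybox",
	// Hypothetical stand-in for the test's container workload.
	Command: []string{"sh", "-c", "cat /test-file && sleep 25"},
	VolumeMounts: []corev1.VolumeMount{{
		Name:      "test-volume",
		MountPath: "/test-file",
		// SubPath mounts a single file from the volume instead of
		// the whole directory.
		SubPath: "secret-key",
	}},
}
vol := corev1.Volume{
	Name: "test-volume",
	VolumeSource: corev1.VolumeSource{
		Secret: &corev1.SecretVolumeSource{SecretName: "subpath-secret"},
	},
}
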
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 14:26:52.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 22 14:26:52.380: INFO: Creating deployment "nginx-deployment"
Jan 22 14:26:52.406: INFO: Waiting for observed generation 1
Jan 22 14:26:54.816: INFO: Waiting for all required pods to come up
Jan 22 14:26:54.834: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 22 14:27:23.509: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 22 14:27:23.521: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 22 14:27:23.533: INFO: Updating deployment nginx-deployment
Jan 22 14:27:23.533: INFO: Waiting for observed generation 2
Jan 22 14:27:26.561: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 22 14:27:26.643: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 22 14:27:27.106: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 22 14:27:27.120: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 22 14:27:27.120: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 22 14:27:27.123: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 22 14:27:27.129: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 22 14:27:27.129: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 22 14:27:27.149: INFO: Updating deployment nginx-deployment
Jan 22 14:27:27.149: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 22 14:27:27.555: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 22 14:27:29.464: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
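
The 20/13 split that the last two lines verify is the proportional-scaling arithmetic itself. At the moment of the scale-up, the rollout is wedged on the non-existent nginx:404 image with the old ReplicaSet at 8 replicas and the new one at 5; with maxSurge=3 (see the Strategy in the dump below) a 30-replica deployment may run up to 33 pods, so 33 - 13 = 20 additional replicas are handed out. The controller splits them roughly in proportion to the current sizes, 8:5, giving the old set 8 + 12 = 20 and the new set 5 + 8 = 13. The strategy fields involved, as a hypothetical client-go sketch (dep assumed to be a *appsv1.Deployment):

// Assumed imports:
//   appsv1 "k8s.io/api/apps/v1"
//   "k8s.io/apimachinery/pkg/util/intstr"
maxSurge := intstr.FromInt(3)       // up to replicas+3 pods during a rollout
maxUnavailable := intstr.FromInt(2) // at most 2 below the desired count
dep.Spec.Strategy = appsv1.DeploymentStrategy{
	Type: appsv1.RollingUpdateDeploymentStrategyType,
	RollingUpdate: &appsv1.RollingUpdateDeployment{
		MaxSurge:       &maxSurge,
		MaxUnavailable: &maxUnavailable,
	},
}
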
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 22 14:27:38.921: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-4609,SelfLink:/apis/apps/v1/namespaces/deployment-4609/deployments/nginx-deployment,UID:8cc4cbb9-b07d-41eb-ac96-4db19d47b9df,ResourceVersion:21444676,Generation:3,CreationTimestamp:2020-01-22 14:26:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-22 14:27:25 +0000 UTC 2020-01-22 14:26:52 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-01-22 14:27:27 +0000 UTC 2020-01-22 14:27:27 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}
Jan 22 14:27:40.426: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-4609,SelfLink:/apis/apps/v1/namespaces/deployment-4609/replicasets/nginx-deployment-55fb7cb77f,UID:8d6ed128-6107-4f9a-a08a-1c686c6cafef,ResourceVersion:21444688,Generation:3,CreationTimestamp:2020-01-22 14:27:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 8cc4cbb9-b07d-41eb-ac96-4db19d47b9df 0xc0016cf007 0xc0016cf008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 22 14:27:40.426: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 22 14:27:40.426: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-4609,SelfLink:/apis/apps/v1/namespaces/deployment-4609/replicasets/nginx-deployment-7b8c6f4498,UID:b1c59b8f-823b-4a83-9f96-e7d22be04c96,ResourceVersion:21444673,Generation:3,CreationTimestamp:2020-01-22 14:26:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 8cc4cbb9-b07d-41eb-ac96-4db19d47b9df 0xc0016cf0d7 0xc0016cf0d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 22 14:27:42.248: INFO: Pod "nginx-deployment-55fb7cb77f-4hbgw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4hbgw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-55fb7cb77f-4hbgw,UID:6f1fc576-37ad-4d2a-8902-6ad5678c99b9,ResourceVersion:21444685,Generation:0,CreationTimestamp:2020-01-22 14:27:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8d6ed128-6107-4f9a-a08a-1c686c6cafef 0xc00294eb87 0xc00294eb88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00294ec00} {node.kubernetes.io/unreachable Exists NoExecute 0xc00294ec20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-01-22 14:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-22 14:27:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.249: INFO: Pod "nginx-deployment-55fb7cb77f-5wn9z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5wn9z,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-55fb7cb77f-5wn9z,UID:ecdafe0a-8d87-497e-9813-6d9a0211eb37,ResourceVersion:21444607,Generation:0,CreationTimestamp:2020-01-22 14:27:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8d6ed128-6107-4f9a-a08a-1c686c6cafef 0xc00294ecf7 0xc00294ecf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00294ed60} {node.kubernetes.io/unreachable Exists NoExecute 0xc00294ed80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-22 14:27:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.249: INFO: Pod "nginx-deployment-55fb7cb77f-cl7w2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cl7w2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-55fb7cb77f-cl7w2,UID:feeba4fb-4394-403f-8d0c-3eda7411fab1,ResourceVersion:21444599,Generation:0,CreationTimestamp:2020-01-22 14:27:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8d6ed128-6107-4f9a-a08a-1c686c6cafef 0xc00294ee57 0xc00294ee58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00294eed0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00294eef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-22 14:27:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.249: INFO: Pod "nginx-deployment-55fb7cb77f-hjzbb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hjzbb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-55fb7cb77f-hjzbb,UID:8ef150cd-b675-4533-a296-eadc14d6102b,ResourceVersion:21444682,Generation:0,CreationTimestamp:2020-01-22 14:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8d6ed128-6107-4f9a-a08a-1c686c6cafef 0xc00294efc7 0xc00294efc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00294f030} {node.kubernetes.io/unreachable Exists NoExecute 0xc00294f050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-22 14:27:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.249: INFO: Pod "nginx-deployment-55fb7cb77f-kf7lr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kf7lr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-55fb7cb77f-kf7lr,UID:b45782a0-88b3-4e20-82b7-90948bcc4293,ResourceVersion:21444670,Generation:0,CreationTimestamp:2020-01-22 14:27:31 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8d6ed128-6107-4f9a-a08a-1c686c6cafef 0xc00294f127 0xc00294f128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00294f190} {node.kubernetes.io/unreachable Exists NoExecute 0xc00294f1b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.250: INFO: Pod "nginx-deployment-55fb7cb77f-l7chf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-l7chf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-55fb7cb77f-l7chf,UID:fafdc39c-d5ec-4408-b041-2158c1cc7a5e,ResourceVersion:21444602,Generation:0,CreationTimestamp:2020-01-22 14:27:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8d6ed128-6107-4f9a-a08a-1c686c6cafef 0xc00294f237 0xc00294f238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00294f2a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00294f2c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-22 14:27:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.250: INFO: Pod "nginx-deployment-55fb7cb77f-ll994" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ll994,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-55fb7cb77f-ll994,UID:0e22588e-3903-4284-ac4b-ce4d3ce06f17,ResourceVersion:21444649,Generation:0,CreationTimestamp:2020-01-22 14:27:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8d6ed128-6107-4f9a-a08a-1c686c6cafef 0xc00294f397 0xc00294f398}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00294f400} {node.kubernetes.io/unreachable Exists NoExecute 0xc00294f420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:29 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.250: INFO: Pod "nginx-deployment-55fb7cb77f-mgmj2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mgmj2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-55fb7cb77f-mgmj2,UID:74238372-d587-4ae2-8b24-3f3724997809,ResourceVersion:21444655,Generation:0,CreationTimestamp:2020-01-22 14:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8d6ed128-6107-4f9a-a08a-1c686c6cafef 0xc00294f4a7 0xc00294f4a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00294f520} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00294f540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.250: INFO: Pod "nginx-deployment-55fb7cb77f-pblqj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pblqj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-55fb7cb77f-pblqj,UID:8ba8fde8-9e7d-4961-b914-08b388507159,ResourceVersion:21444697,Generation:0,CreationTimestamp:2020-01-22 14:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8d6ed128-6107-4f9a-a08a-1c686c6cafef 0xc00294f5c7 0xc00294f5c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00294f640} {node.kubernetes.io/unreachable Exists NoExecute 0xc00294f660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:31 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-22 14:27:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 
22 14:27:42.251: INFO: Pod "nginx-deployment-55fb7cb77f-q7v9r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-q7v9r,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-55fb7cb77f-q7v9r,UID:a535b0ba-c88b-4f74-b278-5fc29a31f8fd,ResourceVersion:21444604,Generation:0,CreationTimestamp:2020-01-22 14:27:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8d6ed128-6107-4f9a-a08a-1c686c6cafef 0xc00294f737 0xc00294f738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00294f7b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00294f7d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-22 14:27:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.251: INFO: Pod "nginx-deployment-55fb7cb77f-qgk9p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qgk9p,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-55fb7cb77f-qgk9p,UID:9e80517d-a8c1-44e8-96a5-5327f83132a2,ResourceVersion:21444660,Generation:0,CreationTimestamp:2020-01-22 
14:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8d6ed128-6107-4f9a-a08a-1c686c6cafef 0xc00294f8a7 0xc00294f8a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00294f920} {node.kubernetes.io/unreachable Exists NoExecute 0xc00294f940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.251: INFO: Pod "nginx-deployment-55fb7cb77f-ssq87" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ssq87,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-55fb7cb77f-ssq87,UID:47fa56ea-6645-4c34-af72-e452a12a84cf,ResourceVersion:21444663,Generation:0,CreationTimestamp:2020-01-22 14:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8d6ed128-6107-4f9a-a08a-1c686c6cafef 0xc00294f9c7 0xc00294f9c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00294fa40} {node.kubernetes.io/unreachable Exists NoExecute 0xc00294fa60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.251: INFO: Pod "nginx-deployment-55fb7cb77f-wdh6h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wdh6h,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-55fb7cb77f-wdh6h,UID:3175d8f5-fcb8-41fb-9562-e54a9b3bb799,ResourceVersion:21444612,Generation:0,CreationTimestamp:2020-01-22 14:27:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8d6ed128-6107-4f9a-a08a-1c686c6cafef 0xc00294fae7 0xc00294fae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00294fb60} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00294fb80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-22 14:27:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.252: INFO: Pod "nginx-deployment-7b8c6f4498-25spj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-25spj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-25spj,UID:a542d03f-5bd5-4773-ae3b-97753e494a0e,ResourceVersion:21444527,Generation:0,CreationTimestamp:2020-01-22 14:26:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc00294fc67 0xc00294fc68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00294fce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00294fd00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:26:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 
+0000 UTC 2020-01-22 14:27:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:26:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-22 14:26:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-22 14:27:17 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c93e967ab0dc2317645a1138273ae5c312fb6a7101f1aa3e4161893bd23b09eb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.252: INFO: Pod "nginx-deployment-7b8c6f4498-28qcn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-28qcn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-28qcn,UID:0318aae0-3ee6-4ed7-b393-908108879936,ResourceVersion:21444520,Generation:0,CreationTimestamp:2020-01-22 14:26:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc00294fdd7 0xc00294fdd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00294fe50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00294fe70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:26:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:26:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-22 14:26:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-22 14:27:18 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine 
docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://cfe2174fba99dd5a72c0b4c910d8c8628d8a3f01d3e84ad79cd5646c88e02431}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.252: INFO: Pod "nginx-deployment-7b8c6f4498-29scq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-29scq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-29scq,UID:4b1e66e6-9203-4f5c-9d12-ffe9b0bbf0a3,ResourceVersion:21444665,Generation:0,CreationTimestamp:2020-01-22 14:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc00294ff47 0xc00294ff48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00294ffb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00294ffd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.252: INFO: Pod "nginx-deployment-7b8c6f4498-2tgxh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2tgxh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-2tgxh,UID:a771320d-17d4-4489-af8a-127a092e5d43,ResourceVersion:21444514,Generation:0,CreationTimestamp:2020-01-22 14:26:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet 
nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc002ebe057 0xc002ebe058}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebe1d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebe1f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:26:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:26:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-22 14:26:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-22 14:27:17 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f67e44aa6083b23b97ad8537ac8f86cd1eef2c3ea2f2984265e8a51ff0a32eeb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.252: INFO: Pod "nginx-deployment-7b8c6f4498-59vp9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-59vp9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-59vp9,UID:d2a4c90f-4dfe-46a1-97b7-657883cf62d3,ResourceVersion:21444650,Generation:0,CreationTimestamp:2020-01-22 14:27:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc002ebe937 0xc002ebe938}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebecd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebede0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:29 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.253: INFO: Pod "nginx-deployment-7b8c6f4498-62592" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-62592,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-62592,UID:6de018f6-5a29-44ef-b288-e94c678529d2,ResourceVersion:21444696,Generation:0,CreationTimestamp:2020-01-22 14:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc002ebf107 0xc002ebf108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebf200} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebf220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-22 14:27:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.253: INFO: Pod "nginx-deployment-7b8c6f4498-65mtw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-65mtw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-65mtw,UID:d9b157fc-9994-4bc2-93c2-ecf3efa80aa2,ResourceVersion:21444652,Generation:0,CreationTimestamp:2020-01-22 14:27:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc002ebf447 0xc002ebf448}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebf650} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebf670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:29 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.253: INFO: Pod "nginx-deployment-7b8c6f4498-7gj2q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7gj2q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-7gj2q,UID:e7eeb99e-66c3-4ff0-baf2-8ca21898cf4e,ResourceVersion:21444651,Generation:0,CreationTimestamp:2020-01-22 14:27:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc002ebf857 0xc002ebf858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebfb70} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002ebfb90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:29 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.253: INFO: Pod "nginx-deployment-7b8c6f4498-9nt8h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9nt8h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-9nt8h,UID:ac32ec47-46a4-41f7-b04c-7e083dd164f6,ResourceVersion:21444664,Generation:0,CreationTimestamp:2020-01-22 14:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc002ebfcd7 0xc002ebfcd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebff60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebff80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.254: INFO: Pod "nginx-deployment-7b8c6f4498-bfnv9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bfnv9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-bfnv9,UID:787543a5-fd6e-43fd-9c3a-8a1abd06c9e2,ResourceVersion:21444657,Generation:0,CreationTimestamp:2020-01-22 14:27:29 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc00277e337 0xc00277e338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00277e3b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00277e3d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.254: INFO: Pod "nginx-deployment-7b8c6f4498-c8pll" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-c8pll,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-c8pll,UID:a327faa5-dacc-43f4-8f6c-a74db027954c,ResourceVersion:21444540,Generation:0,CreationTimestamp:2020-01-22 14:26:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc00277e457 0xc00277e458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00277e4d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00277e4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:26:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:26:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-01-22 14:26:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-22 14:27:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f18a4dc6bc2d02f7505bc6fedfb26278486adcc66371dabc8189c26de7517a3f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.254: INFO: Pod "nginx-deployment-7b8c6f4498-drlc5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-drlc5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-drlc5,UID:dd911882-c9bd-409a-9c5e-d463d625d6fa,ResourceVersion:21444552,Generation:0,CreationTimestamp:2020-01-22 14:26:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc00277e6d7 0xc00277e6d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00277e860} {node.kubernetes.io/unreachable Exists NoExecute 0xc00277e9c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:26:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:26:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-22 14:26:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-22 14:27:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c2e764630ac0e0de91f373136bc658a4bef9c69f40b3873ed29b42fab68fd6db}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.254: INFO: Pod "nginx-deployment-7b8c6f4498-fp6jm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fp6jm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-fp6jm,UID:4305e83d-2476-483f-a446-829b92859130,ResourceVersion:21444666,Generation:0,CreationTimestamp:2020-01-22 14:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc00277eb87 0xc00277eb88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00277ecd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00277ed70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.255: INFO: Pod "nginx-deployment-7b8c6f4498-gj87k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gj87k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-gj87k,UID:b7f60501-802a-4520-94cf-7720abeca67f,ResourceVersion:21444668,Generation:0,CreationTimestamp:2020-01-22 14:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc00277eef7 0xc00277eef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00277ef70} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00277ef90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-22 14:27:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.255: INFO: Pod "nginx-deployment-7b8c6f4498-pt9wh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pt9wh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-pt9wh,UID:e9a87023-734c-4f55-8703-3ff9dd5a0976,ResourceVersion:21444648,Generation:0,CreationTimestamp:2020-01-22 14:27:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc00277f077 0xc00277f078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00277f110} {node.kubernetes.io/unreachable Exists NoExecute 0xc00277f220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:29 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.255: INFO: Pod "nginx-deployment-7b8c6f4498-pvfx8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pvfx8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-pvfx8,UID:ef53512c-9e6e-492a-b625-37f17fc57b7e,ResourceVersion:21444661,Generation:0,CreationTimestamp:2020-01-22 14:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc00277f2a7 0xc00277f2a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00277f320} {node.kubernetes.io/unreachable Exists NoExecute 0xc00277f340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.255: INFO: Pod "nginx-deployment-7b8c6f4498-q6kxx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q6kxx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-q6kxx,UID:7345b27a-517e-4ddf-bd2c-a0f890e89f37,ResourceVersion:21444669,Generation:0,CreationTimestamp:2020-01-22 14:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc00277f3c7 
0xc00277f3c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00277f430} {node.kubernetes.io/unreachable Exists NoExecute 0xc00277f450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:27 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-22 14:27:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.255: INFO: Pod "nginx-deployment-7b8c6f4498-qc887" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qc887,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-qc887,UID:43e094ed-1748-436b-b787-b79972477ee8,ResourceVersion:21444537,Generation:0,CreationTimestamp:2020-01-22 14:26:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc00277f517 0xc00277f518}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] 
[] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00277f6a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00277f700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:26:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:26:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2020-01-22 14:26:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-22 14:27:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://07ca30ceb3006fcfbc4d464f3b6fb3486d1a4664b9d1a1ce21caa078060be892}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.256: INFO: Pod "nginx-deployment-7b8c6f4498-w8fgn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w8fgn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-w8fgn,UID:d8e12e8f-2f00-44c0-811e-6d24ef806e08,ResourceVersion:21444531,Generation:0,CreationTimestamp:2020-01-22 14:26:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc00277f7d7 0xc00277f7d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00277f850} {node.kubernetes.io/unreachable Exists NoExecute 0xc00277f870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:26:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:26:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-22 14:26:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-22 14:27:18 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f325b89f7e711a8cdd237417da30af43c10cb4a167d95de6647857932a63ca3d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 22 14:27:42.256: INFO: Pod "nginx-deployment-7b8c6f4498-zwhbf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zwhbf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4609,SelfLink:/api/v1/namespaces/deployment-4609/pods/nginx-deployment-7b8c6f4498-zwhbf,UID:6e3f85dd-7f9d-45c4-9922-6de8df5092d8,ResourceVersion:21444525,Generation:0,CreationTimestamp:2020-01-22 14:26:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1c59b8f-823b-4a83-9f96-e7d22be04c96 0xc00277f947 0xc00277f948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8s9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8s9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8s9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00277f9d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00277f9f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:26:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:27:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:26:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-01-22 14:26:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-22 14:27:18 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f0ff42dd115a57279108e3173c9b0a5bc4fe4f15d1705711deadfaa864f65d8d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:27:42.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4609" for this suite. 
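For reference, the Deployment driving the pod dumps above can be approximated with a manifest like the following. This is a minimal sketch: only the name nginx-deployment, the pod label name: nginx, the container name nginx, and the image docker.io/library/nginx:1.14-alpine appear in the log; the replica count and rolling-update bounds are assumptions chosen to illustrate proportional scaling.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 10                  # assumed; the count is not printed in the log above
  selector:
    matchLabels:
      name: nginx               # matches the Labels shown on each pod dump
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3               # assumed bounds; during a rollout, a scale-up is
      maxUnavailable: 2         # split proportionally between old and new ReplicaSets
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine

Proportional scaling means that when a Deployment is resized in the middle of a rollout, the extra replicas are distributed across the existing ReplicaSets in proportion to their current sizes, which is why the ReplicaSet nginx-deployment-7b8c6f4498 above holds a mix of available pods (created 14:26:52) and newly created, not-yet-available pods (created 14:27:27-29).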
Jan 22 14:28:49.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:28:50.014: INFO: namespace deployment-4609 deletion completed in 1m6.468403535s • [SLOW TEST:117.827 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:28:50.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-d8c59206-c754-4daf-82c9-347a73c9d1e8 STEP: Creating a pod to test consume configMaps Jan 22 14:28:54.913: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6d1bbe3c-7562-47a6-ad8c-9970a93c2bc8" in namespace "projected-3848" to be "success or failure" Jan 22 14:28:56.065: INFO: Pod "pod-projected-configmaps-6d1bbe3c-7562-47a6-ad8c-9970a93c2bc8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.151861364s Jan 22 14:28:58.662: INFO: Pod "pod-projected-configmaps-6d1bbe3c-7562-47a6-ad8c-9970a93c2bc8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.749115938s Jan 22 14:29:00.816: INFO: Pod "pod-projected-configmaps-6d1bbe3c-7562-47a6-ad8c-9970a93c2bc8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.903055566s Jan 22 14:29:03.111: INFO: Pod "pod-projected-configmaps-6d1bbe3c-7562-47a6-ad8c-9970a93c2bc8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197503945s Jan 22 14:29:05.123: INFO: Pod "pod-projected-configmaps-6d1bbe3c-7562-47a6-ad8c-9970a93c2bc8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.20949223s Jan 22 14:29:07.129: INFO: Pod "pod-projected-configmaps-6d1bbe3c-7562-47a6-ad8c-9970a93c2bc8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.21606492s Jan 22 14:29:09.138: INFO: Pod "pod-projected-configmaps-6d1bbe3c-7562-47a6-ad8c-9970a93c2bc8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.224932944s Jan 22 14:29:11.155: INFO: Pod "pod-projected-configmaps-6d1bbe3c-7562-47a6-ad8c-9970a93c2bc8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.242061895s Jan 22 14:29:13.169: INFO: Pod "pod-projected-configmaps-6d1bbe3c-7562-47a6-ad8c-9970a93c2bc8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.255933067s Jan 22 14:29:15.181: INFO: Pod "pod-projected-configmaps-6d1bbe3c-7562-47a6-ad8c-9970a93c2bc8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 20.267594047s STEP: Saw pod success Jan 22 14:29:15.181: INFO: Pod "pod-projected-configmaps-6d1bbe3c-7562-47a6-ad8c-9970a93c2bc8" satisfied condition "success or failure" Jan 22 14:29:15.185: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-6d1bbe3c-7562-47a6-ad8c-9970a93c2bc8 container projected-configmap-volume-test: STEP: delete the pod Jan 22 14:29:15.395: INFO: Waiting for pod pod-projected-configmaps-6d1bbe3c-7562-47a6-ad8c-9970a93c2bc8 to disappear Jan 22 14:29:15.405: INFO: Pod pod-projected-configmaps-6d1bbe3c-7562-47a6-ad8c-9970a93c2bc8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:29:15.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3848" for this suite. Jan 22 14:29:21.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:29:21.590: INFO: namespace projected-3848 deletion completed in 6.180285277s • [SLOW TEST:31.576 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:29:21.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 22 14:29:21.788: INFO: Waiting up to 5m0s for pod "pod-92cac4a9-fcd8-4aaf-a60b-7fa6214a0f17" in namespace "emptydir-8283" to be "success or failure" Jan 22 14:29:21.798: INFO: Pod "pod-92cac4a9-fcd8-4aaf-a60b-7fa6214a0f17": Phase="Pending", Reason="", readiness=false. Elapsed: 9.595247ms Jan 22 14:29:23.814: INFO: Pod "pod-92cac4a9-fcd8-4aaf-a60b-7fa6214a0f17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026288704s Jan 22 14:29:25.824: INFO: Pod "pod-92cac4a9-fcd8-4aaf-a60b-7fa6214a0f17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035469979s Jan 22 14:29:27.830: INFO: Pod "pod-92cac4a9-fcd8-4aaf-a60b-7fa6214a0f17": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041617245s Jan 22 14:29:29.837: INFO: Pod "pod-92cac4a9-fcd8-4aaf-a60b-7fa6214a0f17": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048507328s Jan 22 14:29:31.844: INFO: Pod "pod-92cac4a9-fcd8-4aaf-a60b-7fa6214a0f17": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.055550861s STEP: Saw pod success Jan 22 14:29:31.844: INFO: Pod "pod-92cac4a9-fcd8-4aaf-a60b-7fa6214a0f17" satisfied condition "success or failure" Jan 22 14:29:31.848: INFO: Trying to get logs from node iruya-node pod pod-92cac4a9-fcd8-4aaf-a60b-7fa6214a0f17 container test-container: STEP: delete the pod Jan 22 14:29:32.509: INFO: Waiting for pod pod-92cac4a9-fcd8-4aaf-a60b-7fa6214a0f17 to disappear Jan 22 14:29:32.574: INFO: Pod pod-92cac4a9-fcd8-4aaf-a60b-7fa6214a0f17 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:29:32.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8283" for this suite. Jan 22 14:29:38.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:29:38.838: INFO: namespace emptydir-8283 deletion completed in 6.186907716s • [SLOW TEST:17.247 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:29:38.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9030 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-9030 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9030 Jan 22 14:29:39.004: INFO: Found 0 stateful pods, waiting for 1 Jan 22 14:29:49.016: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jan 22 14:29:59.014: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 22 14:29:59.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 22 14:30:01.381: INFO: stderr: "I0122 14:30:00.982890 2658 log.go:172] (0xc000668b00) (0xc000584aa0) Create stream\nI0122 14:30:00.983020 2658 log.go:172] (0xc000668b00) (0xc000584aa0) Stream added, broadcasting: 1\nI0122 14:30:00.992418 2658 log.go:172] 
(0xc000668b00) Reply frame received for 1\nI0122 14:30:00.992495 2658 log.go:172] (0xc000668b00) (0xc0008e2000) Create stream\nI0122 14:30:00.992504 2658 log.go:172] (0xc000668b00) (0xc0008e2000) Stream added, broadcasting: 3\nI0122 14:30:00.995002 2658 log.go:172] (0xc000668b00) Reply frame received for 3\nI0122 14:30:00.995026 2658 log.go:172] (0xc000668b00) (0xc0008e20a0) Create stream\nI0122 14:30:00.995034 2658 log.go:172] (0xc000668b00) (0xc0008e20a0) Stream added, broadcasting: 5\nI0122 14:30:00.996755 2658 log.go:172] (0xc000668b00) Reply frame received for 5\nI0122 14:30:01.124925 2658 log.go:172] (0xc000668b00) Data frame received for 5\nI0122 14:30:01.124997 2658 log.go:172] (0xc0008e20a0) (5) Data frame handling\nI0122 14:30:01.125012 2658 log.go:172] (0xc0008e20a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0122 14:30:01.181404 2658 log.go:172] (0xc000668b00) Data frame received for 3\nI0122 14:30:01.181457 2658 log.go:172] (0xc0008e2000) (3) Data frame handling\nI0122 14:30:01.181473 2658 log.go:172] (0xc0008e2000) (3) Data frame sent\nI0122 14:30:01.367662 2658 log.go:172] (0xc000668b00) (0xc0008e2000) Stream removed, broadcasting: 3\nI0122 14:30:01.367892 2658 log.go:172] (0xc000668b00) Data frame received for 1\nI0122 14:30:01.367927 2658 log.go:172] (0xc000584aa0) (1) Data frame handling\nI0122 14:30:01.367966 2658 log.go:172] (0xc000668b00) (0xc0008e20a0) Stream removed, broadcasting: 5\nI0122 14:30:01.368052 2658 log.go:172] (0xc000584aa0) (1) Data frame sent\nI0122 14:30:01.368074 2658 log.go:172] (0xc000668b00) (0xc000584aa0) Stream removed, broadcasting: 1\nI0122 14:30:01.368112 2658 log.go:172] (0xc000668b00) Go away received\nI0122 14:30:01.368922 2658 log.go:172] (0xc000668b00) (0xc000584aa0) Stream removed, broadcasting: 1\nI0122 14:30:01.368952 2658 log.go:172] (0xc000668b00) (0xc0008e2000) Stream removed, broadcasting: 3\nI0122 14:30:01.368958 2658 log.go:172] (0xc000668b00) (0xc0008e20a0) Stream removed, broadcasting: 5\n" Jan 22 14:30:01.381: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 22 14:30:01.382: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 22 14:30:01.395: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 22 14:30:11.404: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 22 14:30:11.404: INFO: Waiting for statefulset status.replicas updated to 0 Jan 22 14:30:11.472: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 14:30:11.472: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC }] Jan 22 14:30:11.472: INFO: Jan 22 14:30:11.472: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 22 14:30:12.764: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.967739034s Jan 22 14:30:14.232: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.675506714s Jan 22 14:30:15.241: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.207245187s Jan 22 14:30:16.254: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 5.197994467s Jan 22 14:30:17.268: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.18542943s Jan 22 14:30:18.378: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.171633666s Jan 22 14:30:19.400: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.06121697s Jan 22 14:30:20.413: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.039397872s Jan 22 14:30:21.445: INFO: Verifying statefulset ss doesn't scale past 3 for another 26.464327ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9030 Jan 22 14:30:22.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:30:23.174: INFO: stderr: "I0122 14:30:22.881204 2693 log.go:172] (0xc0008d20b0) (0xc000a16140) Create stream\nI0122 14:30:22.881394 2693 log.go:172] (0xc0008d20b0) (0xc000a16140) Stream added, broadcasting: 1\nI0122 14:30:22.887794 2693 log.go:172] (0xc0008d20b0) Reply frame received for 1\nI0122 14:30:22.887820 2693 log.go:172] (0xc0008d20b0) (0xc000a161e0) Create stream\nI0122 14:30:22.887826 2693 log.go:172] (0xc0008d20b0) (0xc000a161e0) Stream added, broadcasting: 3\nI0122 14:30:22.889261 2693 log.go:172] (0xc0008d20b0) Reply frame received for 3\nI0122 14:30:22.889286 2693 log.go:172] (0xc0008d20b0) (0xc00067c1e0) Create stream\nI0122 14:30:22.889301 2693 log.go:172] (0xc0008d20b0) (0xc00067c1e0) Stream added, broadcasting: 5\nI0122 14:30:22.894107 2693 log.go:172] (0xc0008d20b0) Reply frame received for 5\nI0122 14:30:23.005391 2693 log.go:172] (0xc0008d20b0) Data frame received for 5\nI0122 14:30:23.005475 2693 log.go:172] (0xc00067c1e0) (5) Data frame handling\nI0122 14:30:23.005500 2693 log.go:172] (0xc00067c1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0122 14:30:23.009607 2693 log.go:172] (0xc0008d20b0) Data frame received for 3\nI0122 14:30:23.009624 2693 log.go:172] (0xc000a161e0) (3) Data frame handling\nI0122 14:30:23.009638 2693 log.go:172] (0xc000a161e0) (3) Data frame sent\nI0122 14:30:23.164700 2693 log.go:172] (0xc0008d20b0) Data frame received for 1\nI0122 14:30:23.164828 2693 log.go:172] (0xc0008d20b0) (0xc000a161e0) Stream removed, broadcasting: 3\nI0122 14:30:23.164907 2693 log.go:172] (0xc000a16140) (1) Data frame handling\nI0122 14:30:23.164932 2693 log.go:172] (0xc000a16140) (1) Data frame sent\nI0122 14:30:23.165074 2693 log.go:172] (0xc0008d20b0) (0xc00067c1e0) Stream removed, broadcasting: 5\nI0122 14:30:23.165131 2693 log.go:172] (0xc0008d20b0) (0xc000a16140) Stream removed, broadcasting: 1\nI0122 14:30:23.165164 2693 log.go:172] (0xc0008d20b0) Go away received\nI0122 14:30:23.165861 2693 log.go:172] (0xc0008d20b0) (0xc000a16140) Stream removed, broadcasting: 1\nI0122 14:30:23.165887 2693 log.go:172] (0xc0008d20b0) (0xc000a161e0) Stream removed, broadcasting: 3\nI0122 14:30:23.165910 2693 log.go:172] (0xc0008d20b0) (0xc00067c1e0) Stream removed, broadcasting: 5\n" Jan 22 14:30:23.175: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 22 14:30:23.175: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 22 14:30:23.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-1 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:30:23.716: INFO: stderr: "I0122 14:30:23.350583 2714 log.go:172] (0xc000116a50) (0xc0004ec780) Create stream\nI0122 14:30:23.350762 2714 log.go:172] (0xc000116a50) (0xc0004ec780) Stream added, broadcasting: 1\nI0122 14:30:23.355345 2714 log.go:172] (0xc000116a50) Reply frame received for 1\nI0122 14:30:23.355378 2714 log.go:172] (0xc000116a50) (0xc0009dc000) Create stream\nI0122 14:30:23.355388 2714 log.go:172] (0xc000116a50) (0xc0009dc000) Stream added, broadcasting: 3\nI0122 14:30:23.356500 2714 log.go:172] (0xc000116a50) Reply frame received for 3\nI0122 14:30:23.356526 2714 log.go:172] (0xc000116a50) (0xc00090c000) Create stream\nI0122 14:30:23.356541 2714 log.go:172] (0xc000116a50) (0xc00090c000) Stream added, broadcasting: 5\nI0122 14:30:23.357815 2714 log.go:172] (0xc000116a50) Reply frame received for 5\nI0122 14:30:23.494489 2714 log.go:172] (0xc000116a50) Data frame received for 5\nI0122 14:30:23.494538 2714 log.go:172] (0xc00090c000) (5) Data frame handling\nI0122 14:30:23.494572 2714 log.go:172] (0xc00090c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0122 14:30:23.555626 2714 log.go:172] (0xc000116a50) Data frame received for 3\nI0122 14:30:23.555740 2714 log.go:172] (0xc0009dc000) (3) Data frame handling\nI0122 14:30:23.555753 2714 log.go:172] (0xc0009dc000) (3) Data frame sent\nI0122 14:30:23.555857 2714 log.go:172] (0xc000116a50) Data frame received for 5\nI0122 14:30:23.555889 2714 log.go:172] (0xc00090c000) (5) Data frame handling\nI0122 14:30:23.555910 2714 log.go:172] (0xc00090c000) (5) Data frame sent\nI0122 14:30:23.555926 2714 log.go:172] (0xc000116a50) Data frame received for 5\nI0122 14:30:23.555937 2714 log.go:172] (0xc00090c000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ trueI0122 14:30:23.555960 2714 log.go:172] (0xc00090c000) (5) Data frame sent\nI0122 14:30:23.556225 2714 log.go:172] (0xc000116a50) Data frame received for 5\nI0122 14:30:23.556263 2714 log.go:172] (0xc00090c000) (5) Data frame handling\nI0122 14:30:23.556273 2714 log.go:172] (0xc00090c000) (5) Data frame sent\n\nI0122 14:30:23.706499 2714 log.go:172] (0xc000116a50) Data frame received for 1\nI0122 14:30:23.706779 2714 log.go:172] (0xc0004ec780) (1) Data frame handling\nI0122 14:30:23.706813 2714 log.go:172] (0xc0004ec780) (1) Data frame sent\nI0122 14:30:23.707084 2714 log.go:172] (0xc000116a50) (0xc00090c000) Stream removed, broadcasting: 5\nI0122 14:30:23.707214 2714 log.go:172] (0xc000116a50) (0xc0004ec780) Stream removed, broadcasting: 1\nI0122 14:30:23.707279 2714 log.go:172] (0xc000116a50) (0xc0009dc000) Stream removed, broadcasting: 3\nI0122 14:30:23.707324 2714 log.go:172] (0xc000116a50) Go away received\nI0122 14:30:23.707529 2714 log.go:172] (0xc000116a50) (0xc0004ec780) Stream removed, broadcasting: 1\nI0122 14:30:23.709090 2714 log.go:172] (0xc000116a50) (0xc0009dc000) Stream removed, broadcasting: 3\nI0122 14:30:23.709106 2714 log.go:172] (0xc000116a50) (0xc00090c000) Stream removed, broadcasting: 5\n" Jan 22 14:30:23.717: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 22 14:30:23.717: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 22 14:30:23.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 
14:30:24.456: INFO: stderr: "I0122 14:30:23.928109 2734 log.go:172] (0xc00084f3f0) (0xc00083f5e0) Create stream\nI0122 14:30:23.928293 2734 log.go:172] (0xc00084f3f0) (0xc00083f5e0) Stream added, broadcasting: 1\nI0122 14:30:23.945567 2734 log.go:172] (0xc00084f3f0) Reply frame received for 1\nI0122 14:30:23.945646 2734 log.go:172] (0xc00084f3f0) (0xc0001fc820) Create stream\nI0122 14:30:23.945657 2734 log.go:172] (0xc00084f3f0) (0xc0001fc820) Stream added, broadcasting: 3\nI0122 14:30:23.950074 2734 log.go:172] (0xc00084f3f0) Reply frame received for 3\nI0122 14:30:23.950113 2734 log.go:172] (0xc00084f3f0) (0xc0006f0000) Create stream\nI0122 14:30:23.950138 2734 log.go:172] (0xc00084f3f0) (0xc0006f0000) Stream added, broadcasting: 5\nI0122 14:30:23.954276 2734 log.go:172] (0xc00084f3f0) Reply frame received for 5\nI0122 14:30:24.172900 2734 log.go:172] (0xc00084f3f0) Data frame received for 3\nI0122 14:30:24.173033 2734 log.go:172] (0xc0001fc820) (3) Data frame handling\nI0122 14:30:24.173048 2734 log.go:172] (0xc0001fc820) (3) Data frame sent\nI0122 14:30:24.173087 2734 log.go:172] (0xc00084f3f0) Data frame received for 5\nI0122 14:30:24.173094 2734 log.go:172] (0xc0006f0000) (5) Data frame handling\nI0122 14:30:24.173107 2734 log.go:172] (0xc0006f0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0122 14:30:24.446375 2734 log.go:172] (0xc00084f3f0) Data frame received for 1\nI0122 14:30:24.446622 2734 log.go:172] (0xc00084f3f0) (0xc0006f0000) Stream removed, broadcasting: 5\nI0122 14:30:24.446660 2734 log.go:172] (0xc00083f5e0) (1) Data frame handling\nI0122 14:30:24.446674 2734 log.go:172] (0xc00083f5e0) (1) Data frame sent\nI0122 14:30:24.446700 2734 log.go:172] (0xc00084f3f0) (0xc0001fc820) Stream removed, broadcasting: 3\nI0122 14:30:24.446723 2734 log.go:172] (0xc00084f3f0) (0xc00083f5e0) Stream removed, broadcasting: 1\nI0122 14:30:24.447203 2734 log.go:172] (0xc00084f3f0) (0xc00083f5e0) Stream removed, broadcasting: 1\nI0122 14:30:24.447312 2734 log.go:172] (0xc00084f3f0) (0xc0001fc820) Stream removed, broadcasting: 3\nI0122 14:30:24.447419 2734 log.go:172] (0xc00084f3f0) (0xc0006f0000) Stream removed, broadcasting: 5\n" Jan 22 14:30:24.456: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 22 14:30:24.456: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 22 14:30:24.470: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 22 14:30:24.470: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 22 14:30:24.470: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 22 14:30:24.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 22 14:30:25.022: INFO: stderr: "I0122 14:30:24.754445 2747 log.go:172] (0xc00083afd0) (0xc0007e32c0) Create stream\nI0122 14:30:24.754816 2747 log.go:172] (0xc00083afd0) (0xc0007e32c0) Stream added, broadcasting: 1\nI0122 14:30:24.775577 2747 log.go:172] (0xc00083afd0) Reply frame received for 1\nI0122 14:30:24.775712 2747 log.go:172] (0xc00083afd0) (0xc000782f00) Create stream\nI0122 14:30:24.775724 2747 log.go:172] (0xc00083afd0) 
(0xc000782f00) Stream added, broadcasting: 3\nI0122 14:30:24.779173 2747 log.go:172] (0xc00083afd0) Reply frame received for 3\nI0122 14:30:24.779193 2747 log.go:172] (0xc00083afd0) (0xc0005401e0) Create stream\nI0122 14:30:24.779204 2747 log.go:172] (0xc00083afd0) (0xc0005401e0) Stream added, broadcasting: 5\nI0122 14:30:24.782628 2747 log.go:172] (0xc00083afd0) Reply frame received for 5\nI0122 14:30:24.922639 2747 log.go:172] (0xc00083afd0) Data frame received for 5\nI0122 14:30:24.922693 2747 log.go:172] (0xc0005401e0) (5) Data frame handling\nI0122 14:30:24.922702 2747 log.go:172] (0xc0005401e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0122 14:30:24.922719 2747 log.go:172] (0xc00083afd0) Data frame received for 3\nI0122 14:30:24.922723 2747 log.go:172] (0xc000782f00) (3) Data frame handling\nI0122 14:30:24.922728 2747 log.go:172] (0xc000782f00) (3) Data frame sent\nI0122 14:30:25.018414 2747 log.go:172] (0xc00083afd0) (0xc000782f00) Stream removed, broadcasting: 3\nI0122 14:30:25.018509 2747 log.go:172] (0xc00083afd0) Data frame received for 1\nI0122 14:30:25.018525 2747 log.go:172] (0xc0007e32c0) (1) Data frame handling\nI0122 14:30:25.018581 2747 log.go:172] (0xc0007e32c0) (1) Data frame sent\nI0122 14:30:25.018633 2747 log.go:172] (0xc00083afd0) (0xc0005401e0) Stream removed, broadcasting: 5\nI0122 14:30:25.018674 2747 log.go:172] (0xc00083afd0) (0xc0007e32c0) Stream removed, broadcasting: 1\nI0122 14:30:25.018706 2747 log.go:172] (0xc00083afd0) Go away received\nI0122 14:30:25.019128 2747 log.go:172] (0xc00083afd0) (0xc0007e32c0) Stream removed, broadcasting: 1\nI0122 14:30:25.019179 2747 log.go:172] (0xc00083afd0) (0xc000782f00) Stream removed, broadcasting: 3\nI0122 14:30:25.019209 2747 log.go:172] (0xc00083afd0) (0xc0005401e0) Stream removed, broadcasting: 5\n" Jan 22 14:30:25.024: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 22 14:30:25.024: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 22 14:30:25.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 22 14:30:25.335: INFO: stderr: "I0122 14:30:25.145878 2761 log.go:172] (0xc0009be6e0) (0xc0009a0960) Create stream\nI0122 14:30:25.145989 2761 log.go:172] (0xc0009be6e0) (0xc0009a0960) Stream added, broadcasting: 1\nI0122 14:30:25.149501 2761 log.go:172] (0xc0009be6e0) Reply frame received for 1\nI0122 14:30:25.149553 2761 log.go:172] (0xc0009be6e0) (0xc0009a0000) Create stream\nI0122 14:30:25.149563 2761 log.go:172] (0xc0009be6e0) (0xc0009a0000) Stream added, broadcasting: 3\nI0122 14:30:25.150301 2761 log.go:172] (0xc0009be6e0) Reply frame received for 3\nI0122 14:30:25.150324 2761 log.go:172] (0xc0009be6e0) (0xc000628140) Create stream\nI0122 14:30:25.150331 2761 log.go:172] (0xc0009be6e0) (0xc000628140) Stream added, broadcasting: 5\nI0122 14:30:25.151117 2761 log.go:172] (0xc0009be6e0) Reply frame received for 5\nI0122 14:30:25.221079 2761 log.go:172] (0xc0009be6e0) Data frame received for 5\nI0122 14:30:25.221149 2761 log.go:172] (0xc000628140) (5) Data frame handling\nI0122 14:30:25.221187 2761 log.go:172] (0xc000628140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0122 14:30:25.250705 2761 log.go:172] (0xc0009be6e0) Data frame received for 3\nI0122 14:30:25.250752 2761 log.go:172] (0xc0009a0000) (3) Data frame 
handling\nI0122 14:30:25.250767 2761 log.go:172] (0xc0009a0000) (3) Data frame sent\nI0122 14:30:25.330218 2761 log.go:172] (0xc0009be6e0) (0xc0009a0000) Stream removed, broadcasting: 3\nI0122 14:30:25.330294 2761 log.go:172] (0xc0009be6e0) Data frame received for 1\nI0122 14:30:25.330305 2761 log.go:172] (0xc0009be6e0) (0xc000628140) Stream removed, broadcasting: 5\nI0122 14:30:25.330351 2761 log.go:172] (0xc0009a0960) (1) Data frame handling\nI0122 14:30:25.330376 2761 log.go:172] (0xc0009a0960) (1) Data frame sent\nI0122 14:30:25.330387 2761 log.go:172] (0xc0009be6e0) (0xc0009a0960) Stream removed, broadcasting: 1\nI0122 14:30:25.330401 2761 log.go:172] (0xc0009be6e0) Go away received\nI0122 14:30:25.330732 2761 log.go:172] (0xc0009be6e0) (0xc0009a0960) Stream removed, broadcasting: 1\nI0122 14:30:25.330749 2761 log.go:172] (0xc0009be6e0) (0xc0009a0000) Stream removed, broadcasting: 3\nI0122 14:30:25.330762 2761 log.go:172] (0xc0009be6e0) (0xc000628140) Stream removed, broadcasting: 5\n" Jan 22 14:30:25.335: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 22 14:30:25.335: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 22 14:30:25.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 22 14:30:25.904: INFO: stderr: "I0122 14:30:25.616844 2779 log.go:172] (0xc000117080) (0xc0005f4d20) Create stream\nI0122 14:30:25.617098 2779 log.go:172] (0xc000117080) (0xc0005f4d20) Stream added, broadcasting: 1\nI0122 14:30:25.621825 2779 log.go:172] (0xc000117080) Reply frame received for 1\nI0122 14:30:25.621886 2779 log.go:172] (0xc000117080) (0xc000714000) Create stream\nI0122 14:30:25.621900 2779 log.go:172] (0xc000117080) (0xc000714000) Stream added, broadcasting: 3\nI0122 14:30:25.623793 2779 log.go:172] (0xc000117080) Reply frame received for 3\nI0122 14:30:25.623826 2779 log.go:172] (0xc000117080) (0xc00074c000) Create stream\nI0122 14:30:25.623865 2779 log.go:172] (0xc000117080) (0xc00074c000) Stream added, broadcasting: 5\nI0122 14:30:25.625515 2779 log.go:172] (0xc000117080) Reply frame received for 5\nI0122 14:30:25.721350 2779 log.go:172] (0xc000117080) Data frame received for 5\nI0122 14:30:25.721472 2779 log.go:172] (0xc00074c000) (5) Data frame handling\nI0122 14:30:25.721519 2779 log.go:172] (0xc00074c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0122 14:30:25.766036 2779 log.go:172] (0xc000117080) Data frame received for 3\nI0122 14:30:25.766101 2779 log.go:172] (0xc000714000) (3) Data frame handling\nI0122 14:30:25.766132 2779 log.go:172] (0xc000714000) (3) Data frame sent\nI0122 14:30:25.891732 2779 log.go:172] (0xc000117080) Data frame received for 1\nI0122 14:30:25.891932 2779 log.go:172] (0xc000117080) (0xc000714000) Stream removed, broadcasting: 3\nI0122 14:30:25.892045 2779 log.go:172] (0xc0005f4d20) (1) Data frame handling\nI0122 14:30:25.892069 2779 log.go:172] (0xc0005f4d20) (1) Data frame sent\nI0122 14:30:25.892084 2779 log.go:172] (0xc000117080) (0xc0005f4d20) Stream removed, broadcasting: 1\nI0122 14:30:25.892320 2779 log.go:172] (0xc000117080) (0xc00074c000) Stream removed, broadcasting: 5\nI0122 14:30:25.893022 2779 log.go:172] (0xc000117080) Go away received\nI0122 14:30:25.893388 2779 log.go:172] (0xc000117080) (0xc0005f4d20) Stream removed, broadcasting: 1\nI0122 14:30:25.893478 2779 
log.go:172] (0xc000117080) (0xc000714000) Stream removed, broadcasting: 3\nI0122 14:30:25.893526 2779 log.go:172] (0xc000117080) (0xc00074c000) Stream removed, broadcasting: 5\n" Jan 22 14:30:25.904: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 22 14:30:25.904: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 22 14:30:25.904: INFO: Waiting for statefulset status.replicas updated to 0 Jan 22 14:30:25.913: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 22 14:30:35.940: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 22 14:30:35.940: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 22 14:30:35.940: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 22 14:30:36.010: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 14:30:36.010: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC }] Jan 22 14:30:36.010: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC }] Jan 22 14:30:36.010: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC }] Jan 22 14:30:36.010: INFO: Jan 22 14:30:36.010: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 22 14:30:37.727: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 14:30:37.727: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC }] Jan 22 14:30:37.727: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC }] Jan 22 14:30:37.727: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC }] Jan 22 14:30:37.728: INFO: Jan 22 14:30:37.728: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 22 14:30:38.830: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 14:30:38.830: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC }] Jan 22 14:30:38.830: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC }] Jan 22 14:30:38.830: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC }] Jan 22 14:30:38.831: INFO: Jan 22 14:30:38.831: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 22 14:30:40.012: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 14:30:40.012: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC }] Jan 22 14:30:40.012: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC }] Jan 22 14:30:40.012: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC }] Jan 22 14:30:40.012: INFO: Jan 22 14:30:40.012: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 22 14:30:41.027: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 14:30:41.027: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC }] Jan 22 14:30:41.027: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC }] Jan 22 14:30:41.027: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC }] Jan 22 14:30:41.027: INFO: Jan 22 14:30:41.027: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 22 14:30:42.035: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 14:30:42.035: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC }] Jan 22 14:30:42.035: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC }] Jan 22 14:30:42.035: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC }] Jan 22 14:30:42.035: INFO: Jan 22 14:30:42.035: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 22 14:30:43.065: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 14:30:43.065: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC }] Jan 22 14:30:43.065: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC }] Jan 22 14:30:43.065: INFO: Jan 22 14:30:43.065: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 22 14:30:44.077: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 14:30:44.077: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC }] Jan 22 14:30:44.077: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC }] Jan 22 14:30:44.077: INFO: Jan 22 14:30:44.077: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 22 14:30:45.093: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 14:30:45.094: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:29:39 +0000 UTC }] Jan 22 14:30:45.094: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 14:30:11 +0000 UTC }] Jan 22 
14:30:45.094: INFO: Jan 22 14:30:45.094: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9030 Jan 22 14:30:46.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:30:46.384: INFO: rc: 1 Jan 22 14:30:46.384: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001980f30 exit status 1 true [0xc002328b40 0xc002328b80 0xc002328bb0] [0xc002328b40 0xc002328b80 0xc002328bb0] [0xc002328b70 0xc002328ba0] [0xba6c50 0xba6c50] 0xc001609c20 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 22 14:30:56.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:30:56.660: INFO: rc: 1 Jan 22 14:30:56.660: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d6dcb0 exit status 1 true [0xc0010246c0 0xc0010246d8 0xc0010246f0] [0xc0010246c0 0xc0010246d8 0xc0010246f0] [0xc0010246d0 0xc0010246e8] [0xba6c50 0xba6c50] 0xc0023e84e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:31:06.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:31:06.781: INFO: rc: 1 Jan 22 14:31:06.782: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002cfe090 exit status 1 true [0xc000700b20 0xc000700c60 0xc000700de8] [0xc000700b20 0xc000700c60 0xc000700de8] [0xc000700ba0 0xc000700d40] [0xba6c50 0xba6c50] 0xc002cce840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:31:16.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:31:16.941: INFO: rc: 1 Jan 22 14:31:16.941: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002846090 exit status 1 true [0xc0008bc038 0xc0008bc180 0xc0008bc280] [0xc0008bc038 0xc0008bc180 0xc0008bc280] [0xc0008bc150 0xc0008bc218] [0xba6c50 0xba6c50] 0xc0032e2fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 
14:31:26.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:31:27.112: INFO: rc: 1 Jan 22 14:31:27.112: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002cfe180 exit status 1 true [0xc000700e40 0xc000700f68 0xc000701078] [0xc000700e40 0xc000700f68 0xc000701078] [0xc000700f00 0xc000701048] [0xba6c50 0xba6c50] 0xc002cced20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:31:37.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:31:37.242: INFO: rc: 1 Jan 22 14:31:37.243: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00203a0c0 exit status 1 true [0xc002dc4000 0xc002dc4018 0xc002dc4030] [0xc002dc4000 0xc002dc4018 0xc002dc4030] [0xc002dc4010 0xc002dc4028] [0xba6c50 0xba6c50] 0xc00263c420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:31:47.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:31:47.445: INFO: rc: 1 Jan 22 14:31:47.446: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00203a1b0 exit status 1 true [0xc002dc4038 0xc002dc4050 0xc002dc4068] [0xc002dc4038 0xc002dc4050 0xc002dc4068] [0xc002dc4048 0xc002dc4060] [0xba6c50 0xba6c50] 0xc00263cc00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:31:57.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:31:57.526: INFO: rc: 1 Jan 22 14:31:57.526: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002846180 exit status 1 true [0xc0008bc2c8 0xc0008bc388 0xc0008bc500] [0xc0008bc2c8 0xc0008bc388 0xc0008bc500] [0xc0008bc358 0xc0008bc490] [0xba6c50 0xba6c50] 0xc0032e32c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:32:07.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:32:07.671: INFO: rc: 1 Jan 22 14:32:07.671: INFO: 
Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00203a2a0 exit status 1 true [0xc002dc4070 0xc002dc4088 0xc002dc40a0] [0xc002dc4070 0xc002dc4088 0xc002dc40a0] [0xc002dc4080 0xc002dc4098] [0xba6c50 0xba6c50] 0xc00263cfc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:32:17.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:32:18.386: INFO: rc: 1 Jan 22 14:32:18.386: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002c88090 exit status 1 true [0xc002f5c000 0xc002f5c038 0xc002f5c078] [0xc002f5c000 0xc002f5c038 0xc002f5c078] [0xc002f5c028 0xc002f5c068] [0xba6c50 0xba6c50] 0xc00299e300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:32:28.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:32:28.614: INFO: rc: 1 Jan 22 14:32:28.614: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002846270 exit status 1 true [0xc0008bc510 0xc0008bc5d8 0xc0008bc738] [0xc0008bc510 0xc0008bc5d8 0xc0008bc738] [0xc0008bc588 0xc0008bc708] [0xba6c50 0xba6c50] 0xc001e762a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:32:38.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:32:38.758: INFO: rc: 1 Jan 22 14:32:38.758: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002c88180 exit status 1 true [0xc002f5c090 0xc002f5c0b8 0xc002f5c0e0] [0xc002f5c090 0xc002f5c0b8 0xc002f5c0e0] [0xc002f5c0a0 0xc002f5c0d8] [0xba6c50 0xba6c50] 0xc00299e960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:32:48.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:32:48.929: INFO: rc: 1 Jan 22 14:32:48.929: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-0" not found [] 0xc002cfe2a0 exit status 1 true [0xc0007010e0 0xc000701360 0xc0007014e0] [0xc0007010e0 0xc000701360 0xc0007014e0] [0xc000701258 0xc000701488] [0xba6c50 0xba6c50] 0xc002ccf7a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:32:58.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:32:59.067: INFO: rc: 1 Jan 22 14:32:59.067: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002cfe390 exit status 1 true [0xc000701578 0xc0007019f0 0xc000701be8] [0xc000701578 0xc0007019f0 0xc000701be8] [0xc000701830 0xc000701b70] [0xba6c50 0xba6c50] 0xc002ccfbc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:33:09.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:33:09.244: INFO: rc: 1 Jan 22 14:33:09.244: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002cfe0c0 exit status 1 true [0xc000700b20 0xc000700c60 0xc000700de8] [0xc000700b20 0xc000700c60 0xc000700de8] [0xc000700ba0 0xc000700d40] [0xba6c50 0xba6c50] 0xc002cce840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:33:19.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:33:19.421: INFO: rc: 1 Jan 22 14:33:19.421: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002c880c0 exit status 1 true [0xc002f5c000 0xc002f5c038 0xc002f5c078] [0xc002f5c000 0xc002f5c038 0xc002f5c078] [0xc002f5c028 0xc002f5c068] [0xba6c50 0xba6c50] 0xc0032e2fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:33:29.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:33:29.586: INFO: rc: 1 Jan 22 14:33:29.586: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002c881e0 exit status 1 true [0xc002f5c090 0xc002f5c0b8 0xc002f5c0e0] [0xc002f5c090 0xc002f5c0b8 0xc002f5c0e0] [0xc002f5c0a0 0xc002f5c0d8] [0xba6c50 0xba6c50] 0xc0032e32c0 }: Command stdout: stderr: Error from server 
(NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:33:39.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:33:39.746: INFO: rc: 1 Jan 22 14:33:39.747: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002c882a0 exit status 1 true [0xc002f5c0f0 0xc002f5c130 0xc002f5c160] [0xc002f5c0f0 0xc002f5c130 0xc002f5c160] [0xc002f5c118 0xc002f5c148] [0xba6c50 0xba6c50] 0xc00299e120 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:33:49.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:33:49.922: INFO: rc: 1 Jan 22 14:33:49.922: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002c88390 exit status 1 true [0xc002f5c170 0xc002f5c1b8 0xc002f5c208] [0xc002f5c170 0xc002f5c1b8 0xc002f5c208] [0xc002f5c1a0 0xc002f5c1f0] [0xba6c50 0xba6c50] 0xc00299e4e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:33:59.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:34:00.019: INFO: rc: 1 Jan 22 14:34:00.020: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002c88480 exit status 1 true [0xc002f5c210 0xc002f5c238 0xc002f5c288] [0xc002f5c210 0xc002f5c238 0xc002f5c288] [0xc002f5c230 0xc002f5c270] [0xba6c50 0xba6c50] 0xc00299eb40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:34:10.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:34:10.189: INFO: rc: 1 Jan 22 14:34:10.189: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002c88570 exit status 1 true [0xc002f5c2a8 0xc002f5c2d0 0xc002f5c2e8] [0xc002f5c2a8 0xc002f5c2d0 0xc002f5c2e8] [0xc002f5c2c8 0xc002f5c2e0] [0xba6c50 0xba6c50] 0xc00299f1a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:34:20.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' 
Jan 22 14:34:20.330: INFO: rc: 1 Jan 22 14:34:20.331: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00203a0f0 exit status 1 true [0xc002dc4000 0xc002dc4018 0xc002dc4030] [0xc002dc4000 0xc002dc4018 0xc002dc4030] [0xc002dc4010 0xc002dc4028] [0xba6c50 0xba6c50] 0xc00263c420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:34:30.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:34:30.480: INFO: rc: 1 Jan 22 14:34:30.480: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00203a210 exit status 1 true [0xc002dc4038 0xc002dc4050 0xc002dc4068] [0xc002dc4038 0xc002dc4050 0xc002dc4068] [0xc002dc4048 0xc002dc4060] [0xba6c50 0xba6c50] 0xc00263cc00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:34:40.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:34:40.641: INFO: rc: 1 Jan 22 14:34:40.642: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0028460c0 exit status 1 true [0xc0008bc038 0xc0008bc180 0xc0008bc280] [0xc0008bc038 0xc0008bc180 0xc0008bc280] [0xc0008bc150 0xc0008bc218] [0xba6c50 0xba6c50] 0xc001e76780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:34:50.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:34:50.782: INFO: rc: 1 Jan 22 14:34:50.782: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0028461e0 exit status 1 true [0xc0008bc2c8 0xc0008bc388 0xc0008bc500] [0xc0008bc2c8 0xc0008bc388 0xc0008bc500] [0xc0008bc358 0xc0008bc490] [0xba6c50 0xba6c50] 0xc001e76d80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:35:00.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:35:00.949: INFO: rc: 1 Jan 22 14:35:00.949: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0028462d0 exit status 1 true [0xc0008bc510 0xc0008bc5d8 0xc0008bc738] [0xc0008bc510 0xc0008bc5d8 0xc0008bc738] [0xc0008bc588 0xc0008bc708] [0xba6c50 0xba6c50] 0xc001e77500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:35:10.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:35:11.117: INFO: rc: 1 Jan 22 14:35:11.117: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00203a090 exit status 1 true [0xc002dc4008 0xc002dc4020 0xc002dc4038] [0xc002dc4008 0xc002dc4020 0xc002dc4038] [0xc002dc4018 0xc002dc4030] [0xba6c50 0xba6c50] 0xc0032e2fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:35:21.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:35:21.299: INFO: rc: 1 Jan 22 14:35:21.299: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0028460f0 exit status 1 true [0xc0008bc038 0xc0008bc180 0xc0008bc280] [0xc0008bc038 0xc0008bc180 0xc0008bc280] [0xc0008bc150 0xc0008bc218] [0xba6c50 0xba6c50] 0xc00263c420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:35:31.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:35:31.488: INFO: rc: 1 Jan 22 14:35:31.489: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0028461b0 exit status 1 true [0xc0008bc2c8 0xc0008bc388 0xc0008bc500] [0xc0008bc2c8 0xc0008bc388 0xc0008bc500] [0xc0008bc358 0xc0008bc490] [0xba6c50 0xba6c50] 0xc00263cc00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:35:41.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:35:41.685: INFO: rc: 1 Jan 22 14:35:41.685: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00203a1b0 exit status 1 true [0xc002dc4040 0xc002dc4058 0xc002dc4070] [0xc002dc4040 0xc002dc4058 0xc002dc4070] [0xc002dc4050 0xc002dc4068] [0xba6c50 0xba6c50] 
0xc0032e32c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 22 14:35:51.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9030 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 14:35:51.887: INFO: rc: 1 Jan 22 14:35:51.887: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Jan 22 14:35:51.887: INFO: Scaling statefulset ss to 0 Jan 22 14:35:51.916: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 22 14:35:51.919: INFO: Deleting all statefulset in ns statefulset-9030 Jan 22 14:35:51.923: INFO: Scaling statefulset ss to 0 Jan 22 14:35:51.933: INFO: Waiting for statefulset status.replicas updated to 0 Jan 22 14:35:51.936: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:35:51.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9030" for this suite. Jan 22 14:35:57.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:35:58.113: INFO: namespace statefulset-9030 deletion completed in 6.147760816s • [SLOW TEST:379.275 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:35:58.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jan 22 14:36:08.291: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jan 22 14:36:18.427: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:36:18.431: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6370" for this suite. Jan 22 14:36:24.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:36:24.601: INFO: namespace pods-6370 deletion completed in 6.16508797s • [SLOW TEST:26.488 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:36:24.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Jan 22 14:36:35.248: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3052 pod-service-account-c6c6a4fc-00b8-460e-a886-423e90d4afdf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 22 14:36:35.744: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3052 pod-service-account-c6c6a4fc-00b8-460e-a886-423e90d4afdf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 22 14:36:36.264: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3052 pod-service-account-c6c6a4fc-00b8-460e-a886-423e90d4afdf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:36:36.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3052" for this suite. 
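Worth unpacking before moving on: the wall of RunHostCmd retries in the burst-scaling spec above is the expected shape of that test. After scaling ss down, the framework keeps trying to restore index.html on ss-0 every 10 seconds; the first attempt fails with 'container not found ("nginx")' while the container is being torn down, and every later attempt fails with 'pods "ss-0" not found' once the pod object itself is gone, until the loop gives up and the test simply confirms status.replicas has reached 0. A StatefulSet that scales in bursts the way ss does here might look like the following sketch (image, labels, and service name are illustrative, not copied from the test):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test              # headless Service assumed to exist
  podManagementPolicy: Parallel  # "burst" behaviour: pods start/stop without waiting on ordinal order
  replicas: 3
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80

Scaling it to zero ('kubectl scale statefulset ss --replicas=0') reproduces the tail of the log above: once ss-0 is deleted, any 'kubectl exec' against it returns exactly the NotFound error the retries keep hitting.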
Jan 22 14:36:42.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:36:42.985: INFO: namespace svcaccounts-3052 deletion completed in 6.244103053s • [SLOW TEST:18.383 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:36:42.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-bc66b562-01be-4c4b-95ab-a54214675e51 STEP: Creating a pod to test consume secrets Jan 22 14:36:43.067: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-94e6c19f-ef36-4467-ae15-3d9c184e25b0" in namespace "projected-4810" to be "success or failure" Jan 22 14:36:43.079: INFO: Pod "pod-projected-secrets-94e6c19f-ef36-4467-ae15-3d9c184e25b0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.240944ms Jan 22 14:36:45.088: INFO: Pod "pod-projected-secrets-94e6c19f-ef36-4467-ae15-3d9c184e25b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02082002s Jan 22 14:36:47.096: INFO: Pod "pod-projected-secrets-94e6c19f-ef36-4467-ae15-3d9c184e25b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028937593s Jan 22 14:36:49.119: INFO: Pod "pod-projected-secrets-94e6c19f-ef36-4467-ae15-3d9c184e25b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051794255s Jan 22 14:36:51.133: INFO: Pod "pod-projected-secrets-94e6c19f-ef36-4467-ae15-3d9c184e25b0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065697685s Jan 22 14:36:53.157: INFO: Pod "pod-projected-secrets-94e6c19f-ef36-4467-ae15-3d9c184e25b0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.09020424s Jan 22 14:36:55.163: INFO: Pod "pod-projected-secrets-94e6c19f-ef36-4467-ae15-3d9c184e25b0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.096061563s STEP: Saw pod success Jan 22 14:36:55.163: INFO: Pod "pod-projected-secrets-94e6c19f-ef36-4467-ae15-3d9c184e25b0" satisfied condition "success or failure" Jan 22 14:36:55.166: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-94e6c19f-ef36-4467-ae15-3d9c184e25b0 container projected-secret-volume-test: STEP: delete the pod Jan 22 14:36:55.596: INFO: Waiting for pod pod-projected-secrets-94e6c19f-ef36-4467-ae15-3d9c184e25b0 to disappear Jan 22 14:36:55.608: INFO: Pod pod-projected-secrets-94e6c19f-ef36-4467-ae15-3d9c184e25b0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:36:55.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4810" for this suite. Jan 22 14:37:01.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:37:01.867: INFO: namespace projected-4810 deletion completed in 6.250969995s • [SLOW TEST:18.882 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:37:01.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 22 14:37:02.001: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jan 22 14:37:03.675: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:37:04.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-119" for this suite. 
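The ReplicationController spec above hinges on two objects sharing the namespace: a ResourceQuota named condition-test that allows only two pods, and an rc of the same name asking for more. The rc then carries a ReplicaFailure condition until it is scaled back within quota, which is the 'has the desired failure condition set' / 'has no failure condition set' pair in the log. A sketch of the two objects (the pod template is illustrative; only the names and the two-pod limit come from the log):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                 # more than the quota admits
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.15

With both applied, 'kubectl get rc condition-test -o jsonpath={.status.conditions}' shows a ReplicaFailure condition referencing the exceeded quota; 'kubectl scale rc condition-test --replicas=2' clears it, mirroring the scale-down step logged above.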
Jan 22 14:37:16.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:37:16.554: INFO: namespace replication-controller-119 deletion completed in 12.483301032s • [SLOW TEST:14.687 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:37:16.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 22 14:37:27.284: INFO: Successfully updated pod "pod-update-282cfbf2-8b95-4bd1-b4d1-e94cecb1aca9" STEP: verifying the updated pod is in kubernetes Jan 22 14:37:27.296: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:37:27.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9998" for this suite. 
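The Pods update spec above is a create/mutate/verify round trip. The log does not show which field is changed, but the pod fields freely mutable in place are metadata such as labels and annotations (plus a few spec fields like container images), so the same flow can be approximated with a hypothetical pod and a label patch:

apiVersion: v1
kind: Pod
metadata:
  name: pod-update-demo   # illustrative; the test generates pod-update-<uuid> names
  labels:
    time: "0"
spec:
  containers:
  - name: main
    image: nginx:1.15

Once the pod is Running, 'kubectl patch pod pod-update-demo -p '{"metadata":{"labels":{"time":"1"}}}'' updates it in place, and a follow-up 'kubectl get pod pod-update-demo --show-labels' confirms the change, which is all the 'Pod update OK' line above asserts.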
Jan 22 14:37:49.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:37:49.475: INFO: namespace pods-9998 deletion completed in 22.17250895s • [SLOW TEST:32.920 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:37:49.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 22 14:38:19.600: INFO: Container started at 2020-01-22 14:37:57 +0000 UTC, pod became ready at 2020-01-22 14:38:18 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:38:19.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3591" for this suite. 
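The readiness-probe spec above asserts two timings: the pod must not report Ready before the probe's initial delay has elapsed (here the container started at 14:37:57 but the pod only became ready at 14:38:18), and the restart count must stay at zero. The log does not include the probe itself; a probe producing this behaviour could be as simple as (delay value and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      initialDelaySeconds: 20   # the probe is not run at all before this
      exec:
        command: ["/bin/true"]  # always succeeds once it does run

For the first ~20 seconds 'kubectl get pod readiness-demo' shows READY 0/1 even though the container is up; it then flips to 1/1 with RESTARTS still 0, matching the assertion above.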
Jan 22 14:38:41.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:38:41.762: INFO: namespace container-probe-3591 deletion completed in 22.154880279s • [SLOW TEST:52.287 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:38:41.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 22 14:38:51.021: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:38:51.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4091" for this suite. 
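The termination-message spec above checks the 'from file' path: the container writes OK to the termination message file and exits successfully, and because FallbackToLogsOnError only falls back to container logs when the container fails with an empty message file, the message must still come from the file. A sketch of such a pod (the name is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]  # the default terminationMessagePath
    terminationMessagePolicy: FallbackToLogsOnError

After the pod succeeds, 'kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'' returns OK, which is the comparison the 'Expected: &{OK} to match' line above performs.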
Jan 22 14:38:57.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:38:57.279: INFO: namespace container-runtime-4091 deletion completed in 6.17115104s • [SLOW TEST:15.517 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:38:57.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 22 14:38:57.432: INFO: Waiting up to 5m0s for pod "pod-66fdc778-f4e8-4b2a-983c-f432fab3e362" in namespace "emptydir-7570" to be "success or failure" Jan 22 14:38:57.441: INFO: Pod "pod-66fdc778-f4e8-4b2a-983c-f432fab3e362": Phase="Pending", Reason="", readiness=false. Elapsed: 9.233203ms Jan 22 14:38:59.451: INFO: Pod "pod-66fdc778-f4e8-4b2a-983c-f432fab3e362": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018720695s Jan 22 14:39:01.510: INFO: Pod "pod-66fdc778-f4e8-4b2a-983c-f432fab3e362": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078438078s Jan 22 14:39:03.521: INFO: Pod "pod-66fdc778-f4e8-4b2a-983c-f432fab3e362": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0894003s Jan 22 14:39:05.533: INFO: Pod "pod-66fdc778-f4e8-4b2a-983c-f432fab3e362": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100944728s Jan 22 14:39:07.544: INFO: Pod "pod-66fdc778-f4e8-4b2a-983c-f432fab3e362": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111540083s STEP: Saw pod success Jan 22 14:39:07.544: INFO: Pod "pod-66fdc778-f4e8-4b2a-983c-f432fab3e362" satisfied condition "success or failure" Jan 22 14:39:07.548: INFO: Trying to get logs from node iruya-node pod pod-66fdc778-f4e8-4b2a-983c-f432fab3e362 container test-container: STEP: delete the pod Jan 22 14:39:07.630: INFO: Waiting for pod pod-66fdc778-f4e8-4b2a-983c-f432fab3e362 to disappear Jan 22 14:39:07.635: INFO: Pod pod-66fdc778-f4e8-4b2a-983c-f432fab3e362 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:39:07.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7570" for this suite. 
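The EmptyDir spec above runs a container as a non-root user against a tmpfs-backed emptyDir and verifies the 0777 permissions named in the test title. The real test uses a dedicated mounttest image; an approximate stand-in with busybox (the UID and paths are illustrative) would be:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # any non-root UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && echo hello > /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs rather than node-local disk

medium: Memory is what makes this the tmpfs variant; the write succeeding as UID 1001 is the non-root part, since the kubelet creates the emptyDir mount world-writable.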
Jan 22 14:39:13.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:39:13.907: INFO: namespace emptydir-7570 deletion completed in 6.265509562s • [SLOW TEST:16.627 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:39:13.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jan 22 14:39:14.028: INFO: namespace kubectl-9961 Jan 22 14:39:14.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9961' Jan 22 14:39:14.414: INFO: stderr: "" Jan 22 14:39:14.414: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jan 22 14:39:15.424: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:39:15.424: INFO: Found 0 / 1 Jan 22 14:39:16.422: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:39:16.422: INFO: Found 0 / 1 Jan 22 14:39:17.424: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:39:17.424: INFO: Found 0 / 1 Jan 22 14:39:18.423: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:39:18.423: INFO: Found 0 / 1 Jan 22 14:39:19.425: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:39:19.425: INFO: Found 0 / 1 Jan 22 14:39:20.428: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:39:20.428: INFO: Found 0 / 1 Jan 22 14:39:21.429: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:39:21.429: INFO: Found 0 / 1 Jan 22 14:39:22.423: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:39:22.423: INFO: Found 0 / 1 Jan 22 14:39:23.424: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:39:23.424: INFO: Found 1 / 1 Jan 22 14:39:23.424: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 22 14:39:23.428: INFO: Selector matched 1 pods for map[app:redis] Jan 22 14:39:23.429: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 22 14:39:23.429: INFO: wait on redis-master startup in kubectl-9961 Jan 22 14:39:23.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rdvgh redis-master --namespace=kubectl-9961' Jan 22 14:39:23.640: INFO: stderr: "" Jan 22 14:39:23.640: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Jan 14:39:21.974 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Jan 14:39:21.974 # Server started, Redis version 3.2.12\n1:M 22 Jan 14:39:21.975 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Jan 14:39:21.975 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jan 22 14:39:23.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9961' Jan 22 14:39:24.061: INFO: stderr: "" Jan 22 14:39:24.061: INFO: stdout: "service/rm2 exposed\n" Jan 22 14:39:24.081: INFO: Service rm2 in namespace kubectl-9961 found. STEP: exposing service Jan 22 14:39:26.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9961' Jan 22 14:39:26.487: INFO: stderr: "" Jan 22 14:39:26.487: INFO: stdout: "service/rm3 exposed\n" Jan 22 14:39:26.498: INFO: Service rm3 in namespace kubectl-9961 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:39:28.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9961" for this suite. 
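The two expose calls above are imperative shortcuts for creating Service objects. `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` is roughly equivalent to applying the manifest below; the selector is inferred from the controller's pod labels (here app: redis, matching the map[app:redis] selector reported earlier in the log), and rm3 repeats the same idea on port 2345, reusing rm2's selector:

apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-9961
spec:
  ports:
  - port: 1234            # the Service port requested with --port
    targetPort: 6379      # the Redis container port requested with --target-port
  selector:
    app: redis            # inferred from the RC's pod labels, per the log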
Jan 22 14:39:52.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:39:52.678: INFO: namespace kubectl-9961 deletion completed in 24.153918846s • [SLOW TEST:38.771 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:39:52.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-0dcb80e0-f4c9-44a7-a112-fe535cfbd98f STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-0dcb80e0-f4c9-44a7-a112-fe535cfbd98f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:40:02.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9484" for this suite. 
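The projected-configMap test above mounts a ConfigMap through a projected volume, updates the ConfigMap, and waits for the kubelet to refresh the file inside the running pod. A hand-written approximation of that pod follows; apart from the ConfigMap name, which is taken from the log, the names, image, and poll command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative; the real name is generated
spec:
  containers:
  - name: projected-configmap-volume-test  # assumed container name
    image: busybox                         # illustrative stand-in
    command: ["sh", "-c", "while true; do cat /etc/projected-configmap-volume/data-1; sleep 2; done"]  # illustrative poll loop; key name assumed
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd-0dcb80e0-f4c9-44a7-a112-fe535cfbd98f   # name from the log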
Jan 22 14:40:25.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:40:25.161: INFO: namespace projected-9484 deletion completed in 22.166690186s • [SLOW TEST:32.482 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:40:25.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 22 14:40:25.250: INFO: Creating ReplicaSet my-hostname-basic-559bb75c-c6e3-4a34-95d2-bf7dff9aa4d2 Jan 22 14:40:25.261: INFO: Pod name my-hostname-basic-559bb75c-c6e3-4a34-95d2-bf7dff9aa4d2: Found 0 pods out of 1 Jan 22 14:40:30.273: INFO: Pod name my-hostname-basic-559bb75c-c6e3-4a34-95d2-bf7dff9aa4d2: Found 1 pods out of 1 Jan 22 14:40:30.273: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-559bb75c-c6e3-4a34-95d2-bf7dff9aa4d2" is running Jan 22 14:40:34.283: INFO: Pod "my-hostname-basic-559bb75c-c6e3-4a34-95d2-bf7dff9aa4d2-mtx7p" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-22 14:40:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-22 14:40:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-559bb75c-c6e3-4a34-95d2-bf7dff9aa4d2]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-22 14:40:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-559bb75c-c6e3-4a34-95d2-bf7dff9aa4d2]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-22 14:40:25 +0000 UTC Reason: Message:}]) Jan 22 14:40:34.283: INFO: Trying to dial the pod Jan 22 14:40:39.330: INFO: Controller my-hostname-basic-559bb75c-c6e3-4a34-95d2-bf7dff9aa4d2: Got expected result from replica 1 [my-hostname-basic-559bb75c-c6e3-4a34-95d2-bf7dff9aa4d2-mtx7p]: "my-hostname-basic-559bb75c-c6e3-4a34-95d2-bf7dff9aa4d2-mtx7p", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:40:39.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-321" for this suite. 
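The ReplicaSet created above serves each pod's hostname back over the network, which is what the "Trying to dial the pod" step verifies: the reply must equal the pod name. A minimal sketch of such a ReplicaSet, with the name taken from the log and the image and port assumed:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-559bb75c-c6e3-4a34-95d2-bf7dff9aa4d2
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic-559bb75c-c6e3-4a34-95d2-bf7dff9aa4d2
  template:
    metadata:
      labels:
        name: my-hostname-basic-559bb75c-c6e3-4a34-95d2-bf7dff9aa4d2
    spec:
      containers:
      - name: my-hostname-basic-559bb75c-c6e3-4a34-95d2-bf7dff9aa4d2
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed; any image that echoes the pod hostname fits the check
        ports:
        - containerPort: 9376                                         # assumed port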
Jan 22 14:40:45.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:40:45.530: INFO: namespace replicaset-321 deletion completed in 6.190562778s • [SLOW TEST:20.368 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:40:45.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jan 22 14:40:45.672: INFO: Number of nodes with available pods: 0 Jan 22 14:40:45.672: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:40:47.434: INFO: Number of nodes with available pods: 0 Jan 22 14:40:47.434: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:40:47.886: INFO: Number of nodes with available pods: 0 Jan 22 14:40:47.886: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:40:48.873: INFO: Number of nodes with available pods: 0 Jan 22 14:40:48.873: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:40:49.689: INFO: Number of nodes with available pods: 0 Jan 22 14:40:49.689: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:40:50.690: INFO: Number of nodes with available pods: 0 Jan 22 14:40:50.690: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:40:53.356: INFO: Number of nodes with available pods: 0 Jan 22 14:40:53.357: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:40:53.685: INFO: Number of nodes with available pods: 0 Jan 22 14:40:53.685: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:40:54.689: INFO: Number of nodes with available pods: 0 Jan 22 14:40:54.689: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:40:55.690: INFO: Number of nodes with available pods: 0 Jan 22 14:40:55.690: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:40:56.695: INFO: Number of nodes with available pods: 1 Jan 22 14:40:56.695: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:40:57.719: INFO: Number of nodes with available pods: 2 Jan 22 14:40:57.719: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
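The "simple DaemonSet" above schedules one pod per node (two in this cluster). A hand-written sketch is below; the name comes from the log, while the label key, container name, and image are assumptions. The step just announced then forces one daemon pod's phase to Failed, and the check that follows confirms the controller recreates it:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-6509
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set        # assumed label key
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app                       # assumed container name
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed image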
Jan 22 14:40:57.804: INFO: Number of nodes with available pods: 2 Jan 22 14:40:57.804: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6509, will wait for the garbage collector to delete the pods Jan 22 14:40:58.944: INFO: Deleting DaemonSet.extensions daemon-set took: 13.475568ms Jan 22 14:40:59.345: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.367704ms Jan 22 14:41:07.959: INFO: Number of nodes with available pods: 0 Jan 22 14:41:07.959: INFO: Number of running nodes: 0, number of available pods: 0 Jan 22 14:41:07.964: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6509/daemonsets","resourceVersion":"21446596"},"items":null} Jan 22 14:41:07.967: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6509/pods","resourceVersion":"21446596"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:41:07.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6509" for this suite. Jan 22 14:41:14.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:41:14.138: INFO: namespace daemonsets-6509 deletion completed in 6.155432241s • [SLOW TEST:28.608 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:41:14.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Jan 22 14:41:14.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3559' Jan 22 14:41:16.482: INFO: stderr: "" Jan 22 14:41:16.482: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 22 14:41:16.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3559' Jan 22 14:41:16.801: INFO: stderr: "" Jan 22 14:41:16.801: INFO: stdout: "update-demo-nautilus-2hl4f update-demo-nautilus-sgjgs " Jan 22 14:41:16.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2hl4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3559' Jan 22 14:41:16.939: INFO: stderr: "" Jan 22 14:41:16.939: INFO: stdout: "" Jan 22 14:41:16.939: INFO: update-demo-nautilus-2hl4f is created but not running Jan 22 14:41:21.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3559' Jan 22 14:41:22.085: INFO: stderr: "" Jan 22 14:41:22.085: INFO: stdout: "update-demo-nautilus-2hl4f update-demo-nautilus-sgjgs " Jan 22 14:41:22.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2hl4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3559' Jan 22 14:41:22.202: INFO: stderr: "" Jan 22 14:41:22.202: INFO: stdout: "" Jan 22 14:41:22.202: INFO: update-demo-nautilus-2hl4f is created but not running Jan 22 14:41:27.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3559' Jan 22 14:41:27.313: INFO: stderr: "" Jan 22 14:41:27.313: INFO: stdout: "update-demo-nautilus-2hl4f update-demo-nautilus-sgjgs " Jan 22 14:41:27.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2hl4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3559' Jan 22 14:41:27.382: INFO: stderr: "" Jan 22 14:41:27.382: INFO: stdout: "true" Jan 22 14:41:27.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2hl4f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3559' Jan 22 14:41:27.493: INFO: stderr: "" Jan 22 14:41:27.493: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 14:41:27.493: INFO: validating pod update-demo-nautilus-2hl4f Jan 22 14:41:27.535: INFO: got data: { "image": "nautilus.jpg" } Jan 22 14:41:27.536: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 14:41:27.536: INFO: update-demo-nautilus-2hl4f is verified up and running Jan 22 14:41:27.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sgjgs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3559' Jan 22 14:41:27.620: INFO: stderr: "" Jan 22 14:41:27.621: INFO: stdout: "" Jan 22 14:41:27.621: INFO: update-demo-nautilus-sgjgs is created but not running Jan 22 14:41:32.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3559' Jan 22 14:41:32.754: INFO: stderr: "" Jan 22 14:41:32.754: INFO: stdout: "update-demo-nautilus-2hl4f update-demo-nautilus-sgjgs " Jan 22 14:41:32.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2hl4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3559' Jan 22 14:41:32.873: INFO: stderr: "" Jan 22 14:41:32.873: INFO: stdout: "true" Jan 22 14:41:32.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2hl4f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3559' Jan 22 14:41:33.024: INFO: stderr: "" Jan 22 14:41:33.024: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 14:41:33.024: INFO: validating pod update-demo-nautilus-2hl4f Jan 22 14:41:33.030: INFO: got data: { "image": "nautilus.jpg" } Jan 22 14:41:33.031: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 14:41:33.031: INFO: update-demo-nautilus-2hl4f is verified up and running Jan 22 14:41:33.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sgjgs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3559' Jan 22 14:41:33.140: INFO: stderr: "" Jan 22 14:41:33.140: INFO: stdout: "true" Jan 22 14:41:33.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sgjgs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3559' Jan 22 14:41:33.255: INFO: stderr: "" Jan 22 14:41:33.255: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 14:41:33.255: INFO: validating pod update-demo-nautilus-sgjgs Jan 22 14:41:33.269: INFO: got data: { "image": "nautilus.jpg" } Jan 22 14:41:33.269: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 22 14:41:33.269: INFO: update-demo-nautilus-sgjgs is verified up and running STEP: rolling-update to new replication controller Jan 22 14:41:33.271: INFO: scanned /root for discovery docs: Jan 22 14:41:33.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3559' Jan 22 14:42:05.039: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 22 14:42:05.040: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 22 14:42:05.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3559' Jan 22 14:42:05.185: INFO: stderr: "" Jan 22 14:42:05.185: INFO: stdout: "update-demo-kitten-j969q update-demo-kitten-qrg7g update-demo-nautilus-2hl4f " STEP: Replicas for name=update-demo: expected=2 actual=3 Jan 22 14:42:10.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3559' Jan 22 14:42:10.354: INFO: stderr: "" Jan 22 14:42:10.354: INFO: stdout: "update-demo-kitten-j969q update-demo-kitten-qrg7g " Jan 22 14:42:10.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-j969q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3559' Jan 22 14:42:10.531: INFO: stderr: "" Jan 22 14:42:10.531: INFO: stdout: "true" Jan 22 14:42:10.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-j969q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3559' Jan 22 14:42:10.626: INFO: stderr: "" Jan 22 14:42:10.627: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 22 14:42:10.627: INFO: validating pod update-demo-kitten-j969q Jan 22 14:42:10.674: INFO: got data: { "image": "kitten.jpg" } Jan 22 14:42:10.674: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 22 14:42:10.674: INFO: update-demo-kitten-j969q is verified up and running Jan 22 14:42:10.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qrg7g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3559' Jan 22 14:42:10.753: INFO: stderr: "" Jan 22 14:42:10.753: INFO: stdout: "true" Jan 22 14:42:10.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qrg7g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3559' Jan 22 14:42:10.860: INFO: stderr: "" Jan 22 14:42:10.861: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 22 14:42:10.861: INFO: validating pod update-demo-kitten-qrg7g Jan 22 14:42:10.897: INFO: got data: { "image": "kitten.jpg" } Jan 22 14:42:10.897: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 22 14:42:10.897: INFO: update-demo-kitten-qrg7g is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:42:10.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3559" for this suite. Jan 22 14:42:39.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:42:39.150: INFO: namespace kubectl-3559 deletion completed in 28.24746228s • [SLOW TEST:85.012 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:42:39.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:42:49.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4578" for this suite. 
Jan 22 14:43:37.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:43:37.502: INFO: namespace kubelet-test-4578 deletion completed in 48.156069631s • [SLOW TEST:58.351 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:43:37.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:44:31.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3624" for this suite. 
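The three containers above, terminate-cmd-rpa, terminate-cmd-rpof, and terminate-cmd-rpn, appear to correspond to restart policies Always, OnFailure, and Never, each paired with a command whose exit status drives the expected RestartCount, Phase, Ready, and State assertions. A sketch of one variant; the image, command, and the reading of the suffix are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpn
spec:
  restartPolicy: Never                 # the "rpn" variant, if the suffix reads as RestartPolicy=Never
  containers:
  - name: terminate-cmd-rpn
    image: busybox                     # illustrative stand-in
    command: ["sh", "-c", "exit 0"]    # illustrative; the real test varies the exit status per variant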
Jan 22 14:44:37.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:44:37.648: INFO: namespace container-runtime-3624 deletion completed in 6.196120001s • [SLOW TEST:60.145 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:44:37.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-mlld STEP: Creating a pod to test atomic-volume-subpath Jan 22 14:44:37.852: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-mlld" in namespace "subpath-2150" to be "success or failure" Jan 22 14:44:37.902: INFO: Pod "pod-subpath-test-projected-mlld": Phase="Pending", Reason="", readiness=false. Elapsed: 50.616235ms Jan 22 14:44:39.918: INFO: Pod "pod-subpath-test-projected-mlld": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065681098s Jan 22 14:44:41.926: INFO: Pod "pod-subpath-test-projected-mlld": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073903537s Jan 22 14:44:43.942: INFO: Pod "pod-subpath-test-projected-mlld": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089640726s Jan 22 14:44:45.951: INFO: Pod "pod-subpath-test-projected-mlld": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098960646s Jan 22 14:44:47.965: INFO: Pod "pod-subpath-test-projected-mlld": Phase="Running", Reason="", readiness=true. Elapsed: 10.112987164s Jan 22 14:44:49.975: INFO: Pod "pod-subpath-test-projected-mlld": Phase="Running", Reason="", readiness=true. Elapsed: 12.122690911s Jan 22 14:44:51.983: INFO: Pod "pod-subpath-test-projected-mlld": Phase="Running", Reason="", readiness=true. Elapsed: 14.13108333s Jan 22 14:44:53.999: INFO: Pod "pod-subpath-test-projected-mlld": Phase="Running", Reason="", readiness=true. Elapsed: 16.147267297s Jan 22 14:44:56.008: INFO: Pod "pod-subpath-test-projected-mlld": Phase="Running", Reason="", readiness=true. Elapsed: 18.156432477s Jan 22 14:44:58.027: INFO: Pod "pod-subpath-test-projected-mlld": Phase="Running", Reason="", readiness=true. Elapsed: 20.175048691s Jan 22 14:45:00.036: INFO: Pod "pod-subpath-test-projected-mlld": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.183852207s Jan 22 14:45:02.119: INFO: Pod "pod-subpath-test-projected-mlld": Phase="Running", Reason="", readiness=true. Elapsed: 24.266738827s Jan 22 14:45:04.125: INFO: Pod "pod-subpath-test-projected-mlld": Phase="Running", Reason="", readiness=true. Elapsed: 26.273581934s Jan 22 14:45:06.132: INFO: Pod "pod-subpath-test-projected-mlld": Phase="Running", Reason="", readiness=true. Elapsed: 28.280454408s Jan 22 14:45:08.140: INFO: Pod "pod-subpath-test-projected-mlld": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.288151241s STEP: Saw pod success Jan 22 14:45:08.140: INFO: Pod "pod-subpath-test-projected-mlld" satisfied condition "success or failure" Jan 22 14:45:08.144: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-mlld container test-container-subpath-projected-mlld: STEP: delete the pod Jan 22 14:45:08.292: INFO: Waiting for pod pod-subpath-test-projected-mlld to disappear Jan 22 14:45:08.303: INFO: Pod pod-subpath-test-projected-mlld no longer exists STEP: Deleting pod pod-subpath-test-projected-mlld Jan 22 14:45:08.304: INFO: Deleting pod "pod-subpath-test-projected-mlld" in namespace "subpath-2150" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:45:08.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2150" for this suite. Jan 22 14:45:14.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:45:14.596: INFO: namespace subpath-2150 deletion completed in 6.249794373s • [SLOW TEST:36.947 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:45:14.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 22 14:45:14.718: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f02be5a-2a9f-4d34-a952-acfffb7a318d" in namespace "downward-api-2022" to be "success or failure" Jan 22 14:45:14.735: INFO: Pod "downwardapi-volume-8f02be5a-2a9f-4d34-a952-acfffb7a318d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.474824ms Jan 22 14:45:16.751: INFO: Pod "downwardapi-volume-8f02be5a-2a9f-4d34-a952-acfffb7a318d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033095097s Jan 22 14:45:18.757: INFO: Pod "downwardapi-volume-8f02be5a-2a9f-4d34-a952-acfffb7a318d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039239207s Jan 22 14:45:20.769: INFO: Pod "downwardapi-volume-8f02be5a-2a9f-4d34-a952-acfffb7a318d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051789435s Jan 22 14:45:22.794: INFO: Pod "downwardapi-volume-8f02be5a-2a9f-4d34-a952-acfffb7a318d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076097184s Jan 22 14:45:24.799: INFO: Pod "downwardapi-volume-8f02be5a-2a9f-4d34-a952-acfffb7a318d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08099189s STEP: Saw pod success Jan 22 14:45:24.799: INFO: Pod "downwardapi-volume-8f02be5a-2a9f-4d34-a952-acfffb7a318d" satisfied condition "success or failure" Jan 22 14:45:24.802: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8f02be5a-2a9f-4d34-a952-acfffb7a318d container client-container: STEP: delete the pod Jan 22 14:45:24.943: INFO: Waiting for pod downwardapi-volume-8f02be5a-2a9f-4d34-a952-acfffb7a318d to disappear Jan 22 14:45:25.097: INFO: Pod downwardapi-volume-8f02be5a-2a9f-4d34-a952-acfffb7a318d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:45:25.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2022" for this suite. Jan 22 14:45:31.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:45:31.253: INFO: namespace downward-api-2022 deletion completed in 6.146907542s • [SLOW TEST:16.657 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:45:31.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
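The Downward API volume test above checks that files projected from pod metadata are created with the volume's defaultMode. Roughly, the pod looks like the following; the container name client-container comes from the log, while the mode value, mount path, items, and image are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example     # illustrative; the real name is generated
spec:
  containers:
  - name: client-container             # container name from the log
    image: busybox                     # illustrative stand-in
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]   # illustrative mode check
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400                # assumed value; this is the field the test asserts on
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
  restartPolicy: Never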
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 22 14:48:34.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:48:34.584: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:48:36.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:48:36.605: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:48:38.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:48:38.596: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:48:40.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:48:40.596: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:48:42.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:48:42.600: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:48:44.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:48:44.601: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:48:46.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:48:46.596: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:48:48.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:48:48.597: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:48:50.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:48:50.598: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:48:52.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:48:52.596: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:48:54.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:48:54.598: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:48:56.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:48:56.598: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:48:58.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:48:58.600: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:00.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:00.596: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:02.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:02.599: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:04.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:04.595: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:06.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:06.596: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:08.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:08.597: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:10.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:10.603: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:12.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:12.607: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:14.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:14.598: 
INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:16.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:16.590: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:18.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:18.610: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:20.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:20.596: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:22.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:22.606: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:24.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:24.598: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:26.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:26.606: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:28.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:28.604: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:30.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:30.606: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:32.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:32.606: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:34.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:34.603: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:36.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:36.596: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:38.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:38.597: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:40.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:40.606: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:42.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:42.603: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:44.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:44.597: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:46.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:46.602: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:48.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:48.601: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:50.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:50.607: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:52.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:52.597: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:54.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:54.594: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:56.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:56.598: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:49:58.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:49:58.603: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:50:00.585: INFO: 
Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:50:00.596: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:50:02.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:50:02.886: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:50:04.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:50:04.596: INFO: Pod pod-with-poststart-exec-hook still exists Jan 22 14:50:06.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 22 14:50:06.676: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:50:06.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4416" for this suite. Jan 22 14:50:28.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:50:28.813: INFO: namespace container-lifecycle-hook-4416 deletion completed in 22.127053254s • [SLOW TEST:297.559 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:50:28.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 22 14:50:28.949: INFO: Waiting up to 5m0s for pod "pod-fc026672-7c40-41d5-9462-356364a92c85" in namespace "emptydir-1003" to be "success or failure" Jan 22 14:50:28.955: INFO: Pod "pod-fc026672-7c40-41d5-9462-356364a92c85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147737ms Jan 22 14:50:30.967: INFO: Pod "pod-fc026672-7c40-41d5-9462-356364a92c85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018152033s Jan 22 14:50:32.985: INFO: Pod "pod-fc026672-7c40-41d5-9462-356364a92c85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035906378s Jan 22 14:50:34.992: INFO: Pod "pod-fc026672-7c40-41d5-9462-356364a92c85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042503186s Jan 22 14:50:37.006: INFO: Pod "pod-fc026672-7c40-41d5-9462-356364a92c85": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056667519s Jan 22 14:50:39.014: INFO: Pod "pod-fc026672-7c40-41d5-9462-356364a92c85": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.065236057s STEP: Saw pod success Jan 22 14:50:39.014: INFO: Pod "pod-fc026672-7c40-41d5-9462-356364a92c85" satisfied condition "success or failure" Jan 22 14:50:39.018: INFO: Trying to get logs from node iruya-node pod pod-fc026672-7c40-41d5-9462-356364a92c85 container test-container: STEP: delete the pod Jan 22 14:50:39.166: INFO: Waiting for pod pod-fc026672-7c40-41d5-9462-356364a92c85 to disappear Jan 22 14:50:39.175: INFO: Pod pod-fc026672-7c40-41d5-9462-356364a92c85 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:50:39.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1003" for this suite. Jan 22 14:50:45.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:50:45.360: INFO: namespace emptydir-1003 deletion completed in 6.173782782s • [SLOW TEST:16.546 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:50:45.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-fbbd3ca3-324d-488d-ad52-09b8a10bf4bd STEP: Creating a pod to test consume secrets Jan 22 14:50:45.517: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-73ffa39b-0a6a-4b1d-9f2c-ddaba33500fb" in namespace "projected-3698" to be "success or failure" Jan 22 14:50:45.537: INFO: Pod "pod-projected-secrets-73ffa39b-0a6a-4b1d-9f2c-ddaba33500fb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.838759ms Jan 22 14:50:47.547: INFO: Pod "pod-projected-secrets-73ffa39b-0a6a-4b1d-9f2c-ddaba33500fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029646437s Jan 22 14:50:49.605: INFO: Pod "pod-projected-secrets-73ffa39b-0a6a-4b1d-9f2c-ddaba33500fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088397536s Jan 22 14:50:51.616: INFO: Pod "pod-projected-secrets-73ffa39b-0a6a-4b1d-9f2c-ddaba33500fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099250686s Jan 22 14:50:53.630: INFO: Pod "pod-projected-secrets-73ffa39b-0a6a-4b1d-9f2c-ddaba33500fb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112781284s Jan 22 14:50:55.638: INFO: Pod "pod-projected-secrets-73ffa39b-0a6a-4b1d-9f2c-ddaba33500fb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.120917326s STEP: Saw pod success Jan 22 14:50:55.638: INFO: Pod "pod-projected-secrets-73ffa39b-0a6a-4b1d-9f2c-ddaba33500fb" satisfied condition "success or failure" Jan 22 14:50:55.643: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-73ffa39b-0a6a-4b1d-9f2c-ddaba33500fb container secret-volume-test: STEP: delete the pod Jan 22 14:50:55.709: INFO: Waiting for pod pod-projected-secrets-73ffa39b-0a6a-4b1d-9f2c-ddaba33500fb to disappear Jan 22 14:50:55.808: INFO: Pod pod-projected-secrets-73ffa39b-0a6a-4b1d-9f2c-ddaba33500fb no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:50:55.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3698" for this suite. Jan 22 14:51:01.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:51:02.089: INFO: namespace projected-3698 deletion completed in 6.272919496s • [SLOW TEST:16.729 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:51:02.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jan 22 14:51:02.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8865' Jan 22 14:51:02.828: INFO: stderr: "" Jan 22 14:51:02.828: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 22 14:51:02.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8865' Jan 22 14:51:02.938: INFO: stderr: "" Jan 22 14:51:02.938: INFO: stdout: "update-demo-nautilus-glf28 update-demo-nautilus-sv5bj " Jan 22 14:51:02.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-glf28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:03.130: INFO: stderr: "" Jan 22 14:51:03.130: INFO: stdout: "" Jan 22 14:51:03.130: INFO: update-demo-nautilus-glf28 is created but not running Jan 22 14:51:08.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8865' Jan 22 14:51:09.680: INFO: stderr: "" Jan 22 14:51:09.680: INFO: stdout: "update-demo-nautilus-glf28 update-demo-nautilus-sv5bj " Jan 22 14:51:09.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-glf28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:10.142: INFO: stderr: "" Jan 22 14:51:10.142: INFO: stdout: "" Jan 22 14:51:10.142: INFO: update-demo-nautilus-glf28 is created but not running Jan 22 14:51:15.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8865' Jan 22 14:51:15.572: INFO: stderr: "" Jan 22 14:51:15.572: INFO: stdout: "update-demo-nautilus-glf28 update-demo-nautilus-sv5bj " Jan 22 14:51:15.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-glf28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:15.673: INFO: stderr: "" Jan 22 14:51:15.673: INFO: stdout: "true" Jan 22 14:51:15.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-glf28 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:17.179: INFO: stderr: "" Jan 22 14:51:17.179: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 14:51:17.179: INFO: validating pod update-demo-nautilus-glf28 Jan 22 14:51:17.188: INFO: got data: { "image": "nautilus.jpg" } Jan 22 14:51:17.188: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 14:51:17.188: INFO: update-demo-nautilus-glf28 is verified up and running Jan 22 14:51:17.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sv5bj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:17.276: INFO: stderr: "" Jan 22 14:51:17.276: INFO: stdout: "true" Jan 22 14:51:17.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sv5bj -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:17.384: INFO: stderr: "" Jan 22 14:51:17.384: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 14:51:17.384: INFO: validating pod update-demo-nautilus-sv5bj Jan 22 14:51:17.400: INFO: got data: { "image": "nautilus.jpg" } Jan 22 14:51:17.400: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 14:51:17.400: INFO: update-demo-nautilus-sv5bj is verified up and running STEP: scaling down the replication controller Jan 22 14:51:17.414: INFO: scanned /root for discovery docs: Jan 22 14:51:17.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8865' Jan 22 14:51:18.622: INFO: stderr: "" Jan 22 14:51:18.622: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 22 14:51:18.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8865' Jan 22 14:51:18.786: INFO: stderr: "" Jan 22 14:51:18.786: INFO: stdout: "update-demo-nautilus-glf28 update-demo-nautilus-sv5bj " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 22 14:51:23.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8865' Jan 22 14:51:23.940: INFO: stderr: "" Jan 22 14:51:23.941: INFO: stdout: "update-demo-nautilus-glf28 update-demo-nautilus-sv5bj " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 22 14:51:28.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8865' Jan 22 14:51:29.150: INFO: stderr: "" Jan 22 14:51:29.150: INFO: stdout: "update-demo-nautilus-glf28 " Jan 22 14:51:29.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-glf28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:29.274: INFO: stderr: "" Jan 22 14:51:29.274: INFO: stdout: "true" Jan 22 14:51:29.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-glf28 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:29.373: INFO: stderr: "" Jan 22 14:51:29.373: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 14:51:29.373: INFO: validating pod update-demo-nautilus-glf28 Jan 22 14:51:29.379: INFO: got data: { "image": "nautilus.jpg" } Jan 22 14:51:29.379: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 22 14:51:29.379: INFO: update-demo-nautilus-glf28 is verified up and running STEP: scaling up the replication controller Jan 22 14:51:29.381: INFO: scanned /root for discovery docs: Jan 22 14:51:29.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8865' Jan 22 14:51:30.583: INFO: stderr: "" Jan 22 14:51:30.583: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 22 14:51:30.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8865' Jan 22 14:51:30.720: INFO: stderr: "" Jan 22 14:51:30.720: INFO: stdout: "update-demo-nautilus-glf28 update-demo-nautilus-tp4gg " Jan 22 14:51:30.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-glf28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:30.826: INFO: stderr: "" Jan 22 14:51:30.826: INFO: stdout: "true" Jan 22 14:51:30.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-glf28 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:30.918: INFO: stderr: "" Jan 22 14:51:30.918: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 14:51:30.918: INFO: validating pod update-demo-nautilus-glf28 Jan 22 14:51:30.922: INFO: got data: { "image": "nautilus.jpg" } Jan 22 14:51:30.922: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 14:51:30.922: INFO: update-demo-nautilus-glf28 is verified up and running Jan 22 14:51:30.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tp4gg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:31.030: INFO: stderr: "" Jan 22 14:51:31.030: INFO: stdout: "" Jan 22 14:51:31.030: INFO: update-demo-nautilus-tp4gg is created but not running Jan 22 14:51:36.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8865' Jan 22 14:51:36.191: INFO: stderr: "" Jan 22 14:51:36.191: INFO: stdout: "update-demo-nautilus-glf28 update-demo-nautilus-tp4gg " Jan 22 14:51:36.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-glf28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:36.376: INFO: stderr: "" Jan 22 14:51:36.376: INFO: stdout: "true" Jan 22 14:51:36.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-glf28 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:36.508: INFO: stderr: "" Jan 22 14:51:36.508: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 14:51:36.508: INFO: validating pod update-demo-nautilus-glf28 Jan 22 14:51:36.516: INFO: got data: { "image": "nautilus.jpg" } Jan 22 14:51:36.516: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 14:51:36.516: INFO: update-demo-nautilus-glf28 is verified up and running Jan 22 14:51:36.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tp4gg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:36.592: INFO: stderr: "" Jan 22 14:51:36.592: INFO: stdout: "" Jan 22 14:51:36.592: INFO: update-demo-nautilus-tp4gg is created but not running Jan 22 14:51:41.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8865' Jan 22 14:51:41.730: INFO: stderr: "" Jan 22 14:51:41.730: INFO: stdout: "update-demo-nautilus-glf28 update-demo-nautilus-tp4gg " Jan 22 14:51:41.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-glf28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:41.867: INFO: stderr: "" Jan 22 14:51:41.867: INFO: stdout: "true" Jan 22 14:51:41.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-glf28 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:42.009: INFO: stderr: "" Jan 22 14:51:42.009: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 14:51:42.009: INFO: validating pod update-demo-nautilus-glf28 Jan 22 14:51:42.030: INFO: got data: { "image": "nautilus.jpg" } Jan 22 14:51:42.030: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 14:51:42.030: INFO: update-demo-nautilus-glf28 is verified up and running Jan 22 14:51:42.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tp4gg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:42.178: INFO: stderr: "" Jan 22 14:51:42.178: INFO: stdout: "true" Jan 22 14:51:42.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tp4gg -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8865' Jan 22 14:51:42.286: INFO: stderr: "" Jan 22 14:51:42.286: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 14:51:42.286: INFO: validating pod update-demo-nautilus-tp4gg Jan 22 14:51:42.303: INFO: got data: { "image": "nautilus.jpg" } Jan 22 14:51:42.303: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 14:51:42.303: INFO: update-demo-nautilus-tp4gg is verified up and running STEP: using delete to clean up resources Jan 22 14:51:42.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8865' Jan 22 14:51:42.433: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 22 14:51:42.433: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 22 14:51:42.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8865' Jan 22 14:51:42.622: INFO: stderr: "No resources found.\n" Jan 22 14:51:42.622: INFO: stdout: "" Jan 22 14:51:42.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8865 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 22 14:51:42.752: INFO: stderr: "" Jan 22 14:51:42.752: INFO: stdout: "update-demo-nautilus-glf28\nupdate-demo-nautilus-tp4gg\n" Jan 22 14:51:43.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8865' Jan 22 14:51:43.522: INFO: stderr: "No resources found.\n" Jan 22 14:51:43.522: INFO: stdout: "" Jan 22 14:51:43.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8865 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 22 14:51:43.721: INFO: stderr: "" Jan 22 14:51:43.721: INFO: stdout: "update-demo-nautilus-glf28\nupdate-demo-nautilus-tp4gg\n" Jan 22 14:51:43.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8865' Jan 22 14:51:43.953: INFO: stderr: "No resources found.\n" Jan 22 14:51:43.954: INFO: stdout: "" Jan 22 14:51:43.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8865 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 22 14:51:44.201: INFO: stderr: "" Jan 22 14:51:44.201: INFO: stdout: "update-demo-nautilus-glf28\nupdate-demo-nautilus-tp4gg\n" Jan 22 14:51:44.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8865' Jan 22 14:51:44.392: INFO: stderr: "No resources found.\n" Jan 22 14:51:44.392: INFO: stdout: "" Jan 22 14:51:44.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8865 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ 
end }}{{ end }}' Jan 22 14:51:44.574: INFO: stderr: "" Jan 22 14:51:44.575: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:51:44.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8865" for this suite. Jan 22 14:52:07.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:52:07.511: INFO: namespace kubectl-8865 deletion completed in 22.927566366s • [SLOW TEST:65.421 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:52:07.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-3651/configmap-test-3149b5ea-9331-4d01-b7de-62a72a10b742 STEP: Creating a pod to test consume configMaps Jan 22 14:52:07.614: INFO: Waiting up to 5m0s for pod "pod-configmaps-b3acae56-697d-4d34-b901-fd440b9f511d" in namespace "configmap-3651" to be "success or failure" Jan 22 14:52:07.648: INFO: Pod "pod-configmaps-b3acae56-697d-4d34-b901-fd440b9f511d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.329344ms Jan 22 14:52:09.656: INFO: Pod "pod-configmaps-b3acae56-697d-4d34-b901-fd440b9f511d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04207759s Jan 22 14:52:11.663: INFO: Pod "pod-configmaps-b3acae56-697d-4d34-b901-fd440b9f511d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049327941s Jan 22 14:52:13.673: INFO: Pod "pod-configmaps-b3acae56-697d-4d34-b901-fd440b9f511d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059242741s Jan 22 14:52:15.681: INFO: Pod "pod-configmaps-b3acae56-697d-4d34-b901-fd440b9f511d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067483849s Jan 22 14:52:17.690: INFO: Pod "pod-configmaps-b3acae56-697d-4d34-b901-fd440b9f511d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.075843679s STEP: Saw pod success Jan 22 14:52:17.690: INFO: Pod "pod-configmaps-b3acae56-697d-4d34-b901-fd440b9f511d" satisfied condition "success or failure" Jan 22 14:52:17.694: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b3acae56-697d-4d34-b901-fd440b9f511d container env-test: STEP: delete the pod Jan 22 14:52:18.019: INFO: Waiting for pod pod-configmaps-b3acae56-697d-4d34-b901-fd440b9f511d to disappear Jan 22 14:52:18.027: INFO: Pod pod-configmaps-b3acae56-697d-4d34-b901-fd440b9f511d no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:52:18.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3651" for this suite. Jan 22 14:52:24.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:52:24.175: INFO: namespace configmap-3651 deletion completed in 6.1393492s • [SLOW TEST:16.663 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:52:24.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 22 14:52:32.901: INFO: Successfully updated pod "annotationupdate6e60da62-b430-4b8b-abc9-3070e4a84c22" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:52:37.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5443" for this suite. 
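
The downward-api-5443 spec above projects pod metadata into a file through a downwardAPI volume, updates the pod's annotations, and waits for the kubelet to refresh the mounted file. A minimal sketch of the same pattern, assuming hypothetical names (pod annotationupdate-demo, volume podinfo) in place of the generated ones in the log:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: "1"
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
# Changing an annotation should eventually show up in the mounted file,
# which is what the "Successfully updated pod" step above verifies:
kubectl annotate pod annotationupdate-demo build="2" --overwrite
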
Jan 22 14:52:59.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:52:59.352: INFO: namespace downward-api-5443 deletion completed in 22.340626191s • [SLOW TEST:35.177 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:52:59.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-f0efdf34-b67a-46bd-bb4b-f41148117341 in namespace container-probe-7996 Jan 22 14:53:07.577: INFO: Started pod busybox-f0efdf34-b67a-46bd-bb4b-f41148117341 in namespace container-probe-7996 STEP: checking the pod's current state and verifying that restartCount is present Jan 22 14:53:07.581: INFO: Initial restart count of pod busybox-f0efdf34-b67a-46bd-bb4b-f41148117341 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:57:09.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7996" for this suite. 
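
The container-probe-7996 spec above runs a pod whose exec liveness probe ("cat /tmp/health") keeps succeeding, then watches for roughly four minutes to confirm the restart count stays at 0. A minimal sketch of such a pod, assuming a hypothetical name (liveness-exec-ok):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-ok
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
EOF
# Since /tmp/health is never removed, the probe keeps passing and
# restartCount should remain 0, matching the check in the log:
kubectl get pod liveness-exec-ok -o jsonpath='{.status.containerStatuses[0].restartCount}'
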
Jan 22 14:57:15.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:57:15.466: INFO: namespace container-probe-7996 deletion completed in 6.170944172s • [SLOW TEST:256.113 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:57:15.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 22 14:57:15.605: INFO: Create a RollingUpdate DaemonSet Jan 22 14:57:15.614: INFO: Check that daemon pods launch on every node of the cluster Jan 22 14:57:15.638: INFO: Number of nodes with available pods: 0 Jan 22 14:57:15.639: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:57:17.513: INFO: Number of nodes with available pods: 0 Jan 22 14:57:17.513: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:57:18.035: INFO: Number of nodes with available pods: 0 Jan 22 14:57:18.035: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:57:18.660: INFO: Number of nodes with available pods: 0 Jan 22 14:57:18.660: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:57:19.658: INFO: Number of nodes with available pods: 0 Jan 22 14:57:19.658: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:57:20.659: INFO: Number of nodes with available pods: 0 Jan 22 14:57:20.659: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:57:21.986: INFO: Number of nodes with available pods: 0 Jan 22 14:57:21.986: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:57:22.888: INFO: Number of nodes with available pods: 0 Jan 22 14:57:22.888: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:57:23.904: INFO: Number of nodes with available pods: 0 Jan 22 14:57:23.904: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:57:24.702: INFO: Number of nodes with available pods: 0 Jan 22 14:57:24.702: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:57:25.671: INFO: Number of nodes with available pods: 0 Jan 22 14:57:25.671: INFO: Node iruya-node is running more than one daemon pod Jan 22 14:57:26.676: INFO: Number of nodes with available pods: 2 Jan 22 14:57:26.677: INFO: Number of running nodes: 2, number of available pods: 2 Jan 22 14:57:26.677: INFO: Update the DaemonSet to trigger a rollout Jan 22 14:57:26.692: INFO: Updating DaemonSet daemon-set 
Jan 22 14:57:39.075: INFO: Roll back the DaemonSet before rollout is complete Jan 22 14:57:39.103: INFO: Updating DaemonSet daemon-set Jan 22 14:57:39.103: INFO: Make sure DaemonSet rollback is complete Jan 22 14:57:39.395: INFO: Wrong image for pod: daemon-set-85hvz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 22 14:57:39.395: INFO: Pod daemon-set-85hvz is not available Jan 22 14:57:40.439: INFO: Wrong image for pod: daemon-set-85hvz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 22 14:57:40.439: INFO: Pod daemon-set-85hvz is not available Jan 22 14:57:41.439: INFO: Wrong image for pod: daemon-set-85hvz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 22 14:57:41.439: INFO: Pod daemon-set-85hvz is not available Jan 22 14:57:43.188: INFO: Wrong image for pod: daemon-set-85hvz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 22 14:57:43.188: INFO: Pod daemon-set-85hvz is not available Jan 22 14:57:44.441: INFO: Pod daemon-set-fsmfk is not available [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6258, will wait for the garbage collector to delete the pods Jan 22 14:57:44.525: INFO: Deleting DaemonSet.extensions daemon-set took: 14.995954ms Jan 22 14:57:45.325: INFO: Terminating DaemonSet.extensions daemon-set pods took: 800.499666ms Jan 22 14:57:56.631: INFO: Number of nodes with available pods: 0 Jan 22 14:57:56.631: INFO: Number of running nodes: 0, number of available pods: 0 Jan 22 14:57:56.635: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6258/daemonsets","resourceVersion":"21448554"},"items":null} Jan 22 14:57:56.638: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6258/pods","resourceVersion":"21448554"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:57:56.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6258" for this suite. 
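
The daemonsets-6258 spec above updates a RollingUpdate DaemonSet to an unpullable image (foo:non-existent) and rolls it back before the rollout completes, checking that still-healthy pods are not restarted unnecessarily. A minimal sketch of the same flow driven by kubectl rather than the e2e client, with a hypothetical container name (app):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Trigger a rollout to a bad image, then undo it mid-rollout:
kubectl set image daemonset/daemon-set app=foo:non-existent
kubectl rollout undo daemonset/daemon-set
kubectl rollout status daemonset/daemon-set
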
Jan 22 14:58:04.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:58:04.928: INFO: namespace daemonsets-6258 deletion completed in 8.269014018s • [SLOW TEST:49.461 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:58:04.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 22 14:58:05.097: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3311,SelfLink:/api/v1/namespaces/watch-3311/configmaps/e2e-watch-test-label-changed,UID:03fdecb0-e162-460d-b9c2-18add561d9c1,ResourceVersion:21448599,Generation:0,CreationTimestamp:2020-01-22 14:58:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 22 14:58:05.098: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3311,SelfLink:/api/v1/namespaces/watch-3311/configmaps/e2e-watch-test-label-changed,UID:03fdecb0-e162-460d-b9c2-18add561d9c1,ResourceVersion:21448600,Generation:0,CreationTimestamp:2020-01-22 14:58:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 22 14:58:05.098: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3311,SelfLink:/api/v1/namespaces/watch-3311/configmaps/e2e-watch-test-label-changed,UID:03fdecb0-e162-460d-b9c2-18add561d9c1,ResourceVersion:21448601,Generation:0,CreationTimestamp:2020-01-22 14:58:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 22 14:58:15.175: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3311,SelfLink:/api/v1/namespaces/watch-3311/configmaps/e2e-watch-test-label-changed,UID:03fdecb0-e162-460d-b9c2-18add561d9c1,ResourceVersion:21448616,Generation:0,CreationTimestamp:2020-01-22 14:58:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 22 14:58:15.176: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3311,SelfLink:/api/v1/namespaces/watch-3311/configmaps/e2e-watch-test-label-changed,UID:03fdecb0-e162-460d-b9c2-18add561d9c1,ResourceVersion:21448617,Generation:0,CreationTimestamp:2020-01-22 14:58:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jan 22 14:58:15.176: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3311,SelfLink:/api/v1/namespaces/watch-3311/configmaps/e2e-watch-test-label-changed,UID:03fdecb0-e162-460d-b9c2-18add561d9c1,ResourceVersion:21448618,Generation:0,CreationTimestamp:2020-01-22 14:58:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:58:15.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3311" for this suite. 
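
The watch-3311 spec above drives a label-selector watch on ConfigMaps: an object whose label is changed so it stops matching the selector is delivered to the watcher as DELETED, and it reappears as ADDED once the label is restored. A rough command-line equivalent, assuming a hypothetical ConfigMap name (e2e-watch-demo):

kubectl create configmap e2e-watch-demo
kubectl label configmap e2e-watch-demo watch-this-configmap=label-changed-and-restored
# In a second shell, watch only objects matching the label:
kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch
# Back in the first shell: changing the label value drops the object from
# the watch's view (a DELETED event); restoring it re-adds it (ADDED):
kubectl label configmap e2e-watch-demo watch-this-configmap=foo --overwrite
kubectl label configmap e2e-watch-demo watch-this-configmap=label-changed-and-restored --overwrite
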
Jan 22 14:58:21.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:58:21.390: INFO: namespace watch-3311 deletion completed in 6.174190969s • [SLOW TEST:16.462 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:58:21.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-a8e986b0-0e6d-452a-8892-26cecc68d317 STEP: Creating secret with name s-test-opt-upd-fba6ad23-472d-4072-ac6b-451f3c708be6 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-a8e986b0-0e6d-452a-8892-26cecc68d317 STEP: Updating secret s-test-opt-upd-fba6ad23-472d-4072-ac6b-451f3c708be6 STEP: Creating secret with name s-test-opt-create-cb97b324-3636-4bc4-8575-3fd9fa3b4eef STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:58:35.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-923" for this suite. 
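
The projected-923 spec above mounts three secrets through a single projected volume, then deletes one, updates another, and creates the third, waiting for all three changes to appear in the mounted files. A minimal sketch of such a volume, reusing the secret-name prefixes from the log but with the optional flags spelled out:

kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    projected:
      sources:
      - secret:
          name: s-test-opt-del     # can be deleted later; optional, so the volume survives
          optional: true
      - secret:
          name: s-test-opt-upd     # updates propagate into the mounted files
      - secret:
          name: s-test-opt-create  # can be created later; its keys appear once it exists
          optional: true
EOF
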
Jan 22 14:58:57.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:58:58.024: INFO: namespace projected-923 deletion completed in 22.126051318s • [SLOW TEST:36.633 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:58:58.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 22 14:58:58.119: INFO: Waiting up to 5m0s for pod "downward-api-5114fc29-cfdf-46a8-b1b3-dab7fc88d3e3" in namespace "downward-api-8596" to be "success or failure" Jan 22 14:58:58.126: INFO: Pod "downward-api-5114fc29-cfdf-46a8-b1b3-dab7fc88d3e3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.102431ms Jan 22 14:59:00.141: INFO: Pod "downward-api-5114fc29-cfdf-46a8-b1b3-dab7fc88d3e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021710466s Jan 22 14:59:02.156: INFO: Pod "downward-api-5114fc29-cfdf-46a8-b1b3-dab7fc88d3e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036240085s Jan 22 14:59:04.163: INFO: Pod "downward-api-5114fc29-cfdf-46a8-b1b3-dab7fc88d3e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043610736s Jan 22 14:59:06.179: INFO: Pod "downward-api-5114fc29-cfdf-46a8-b1b3-dab7fc88d3e3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059797953s Jan 22 14:59:08.187: INFO: Pod "downward-api-5114fc29-cfdf-46a8-b1b3-dab7fc88d3e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067548331s STEP: Saw pod success Jan 22 14:59:08.187: INFO: Pod "downward-api-5114fc29-cfdf-46a8-b1b3-dab7fc88d3e3" satisfied condition "success or failure" Jan 22 14:59:08.191: INFO: Trying to get logs from node iruya-node pod downward-api-5114fc29-cfdf-46a8-b1b3-dab7fc88d3e3 container dapi-container: STEP: delete the pod Jan 22 14:59:08.292: INFO: Waiting for pod downward-api-5114fc29-cfdf-46a8-b1b3-dab7fc88d3e3 to disappear Jan 22 14:59:08.303: INFO: Pod downward-api-5114fc29-cfdf-46a8-b1b3-dab7fc88d3e3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 14:59:08.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8596" for this suite. 
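
The downward-api-8596 spec above exposes the container's own CPU and memory limits and requests as environment variables via resourceFieldRef. A minimal sketch with a hypothetical pod name (downward-resources-demo):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-resources-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT CPU_REQUEST=$CPU_REQUEST MEMORY_REQUEST=$MEMORY_REQUEST"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF
# The pod runs to completion ("Succeeded" above); the values land in its log:
kubectl logs downward-resources-demo
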
Jan 22 14:59:14.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 14:59:14.503: INFO: namespace downward-api-8596 deletion completed in 6.182728497s • [SLOW TEST:16.479 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 14:59:14.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-0ba0a429-37eb-48e8-85b7-787eaecd0f3d STEP: Creating the pod STEP: Updating configmap configmap-test-upd-0ba0a429-37eb-48e8-85b7-787eaecd0f3d STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:00:28.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-225" for this suite. 
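
The configmap-225 spec above mounts a ConfigMap as a volume, updates it, and polls the mounted file until the new value shows up; most of the 97.989 s above is spent waiting for kubelet sync and cache propagation, which is why this spec runs so long. A minimal sketch, assuming hypothetical names (configmap-upd-demo, configmap-volume-demo):

kubectl create configmap configmap-upd-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: configmap-upd-demo
EOF
# The update is eventually reflected in the mounted file:
kubectl patch configmap configmap-upd-demo -p '{"data":{"data-1":"value-2"}}'
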
Jan 22 15:00:52.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:00:52.492: INFO: namespace configmap-225 deletion completed in 24.251050631s • [SLOW TEST:97.989 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:00:52.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3653 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 22 15:00:52.598: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 22 15:01:32.821: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-3653 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 15:01:32.821: INFO: >>> kubeConfig: /root/.kube/config I0122 15:01:32.884064 9 log.go:172] (0xc00072bef0) (0xc00096e6e0) Create stream I0122 15:01:32.884112 9 log.go:172] (0xc00072bef0) (0xc00096e6e0) Stream added, broadcasting: 1 I0122 15:01:32.892855 9 log.go:172] (0xc00072bef0) Reply frame received for 1 I0122 15:01:32.892911 9 log.go:172] (0xc00072bef0) (0xc00324e0a0) Create stream I0122 15:01:32.892924 9 log.go:172] (0xc00072bef0) (0xc00324e0a0) Stream added, broadcasting: 3 I0122 15:01:32.894700 9 log.go:172] (0xc00072bef0) Reply frame received for 3 I0122 15:01:32.894789 9 log.go:172] (0xc00072bef0) (0xc0001137c0) Create stream I0122 15:01:32.894800 9 log.go:172] (0xc00072bef0) (0xc0001137c0) Stream added, broadcasting: 5 I0122 15:01:32.896487 9 log.go:172] (0xc00072bef0) Reply frame received for 5 I0122 15:01:33.044271 9 log.go:172] (0xc00072bef0) Data frame received for 3 I0122 15:01:33.044332 9 log.go:172] (0xc00324e0a0) (3) Data frame handling I0122 15:01:33.044365 9 log.go:172] (0xc00324e0a0) (3) Data frame sent I0122 15:01:33.181437 9 log.go:172] (0xc00072bef0) Data frame received for 1 I0122 15:01:33.181654 9 log.go:172] (0xc00072bef0) (0xc0001137c0) Stream removed, broadcasting: 5 I0122 15:01:33.181707 9 log.go:172] (0xc00096e6e0) (1) Data frame handling I0122 15:01:33.181728 9 log.go:172] (0xc00096e6e0) (1) Data frame sent I0122 15:01:33.181805 9 log.go:172] (0xc00072bef0) (0xc00324e0a0) Stream removed, broadcasting: 3 I0122 15:01:33.181880 9 log.go:172] (0xc00072bef0) (0xc00096e6e0) Stream removed, broadcasting: 
1 I0122 15:01:33.182146 9 log.go:172] (0xc00072bef0) (0xc00096e6e0) Stream removed, broadcasting: 1 I0122 15:01:33.182162 9 log.go:172] (0xc00072bef0) (0xc00324e0a0) Stream removed, broadcasting: 3 I0122 15:01:33.182172 9 log.go:172] (0xc00072bef0) (0xc0001137c0) Stream removed, broadcasting: 5 Jan 22 15:01:33.182: INFO: Waiting for endpoints: map[] I0122 15:01:33.182291 9 log.go:172] (0xc00072bef0) Go away received Jan 22 15:01:33.192: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-3653 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 15:01:33.192: INFO: >>> kubeConfig: /root/.kube/config I0122 15:01:33.306661 9 log.go:172] (0xc00158ed10) (0xc00324e820) Create stream I0122 15:01:33.306693 9 log.go:172] (0xc00158ed10) (0xc00324e820) Stream added, broadcasting: 1 I0122 15:01:33.313090 9 log.go:172] (0xc00158ed10) Reply frame received for 1 I0122 15:01:33.313131 9 log.go:172] (0xc00158ed10) (0xc000113f40) Create stream I0122 15:01:33.313139 9 log.go:172] (0xc00158ed10) (0xc000113f40) Stream added, broadcasting: 3 I0122 15:01:33.314930 9 log.go:172] (0xc00158ed10) Reply frame received for 3 I0122 15:01:33.314955 9 log.go:172] (0xc00158ed10) (0xc00096ea00) Create stream I0122 15:01:33.314963 9 log.go:172] (0xc00158ed10) (0xc00096ea00) Stream added, broadcasting: 5 I0122 15:01:33.316239 9 log.go:172] (0xc00158ed10) Reply frame received for 5 I0122 15:01:33.422770 9 log.go:172] (0xc00158ed10) Data frame received for 3 I0122 15:01:33.422832 9 log.go:172] (0xc000113f40) (3) Data frame handling I0122 15:01:33.422861 9 log.go:172] (0xc000113f40) (3) Data frame sent I0122 15:01:33.596016 9 log.go:172] (0xc00158ed10) Data frame received for 1 I0122 15:01:33.596309 9 log.go:172] (0xc00158ed10) (0xc000113f40) Stream removed, broadcasting: 3 I0122 15:01:33.596410 9 log.go:172] (0xc00324e820) (1) Data frame handling I0122 15:01:33.596449 9 log.go:172] (0xc00324e820) (1) Data frame sent I0122 15:01:33.596541 9 log.go:172] (0xc00158ed10) (0xc00096ea00) Stream removed, broadcasting: 5 I0122 15:01:33.596618 9 log.go:172] (0xc00158ed10) (0xc00324e820) Stream removed, broadcasting: 1 I0122 15:01:33.596647 9 log.go:172] (0xc00158ed10) Go away received I0122 15:01:33.596988 9 log.go:172] (0xc00158ed10) (0xc00324e820) Stream removed, broadcasting: 1 I0122 15:01:33.597040 9 log.go:172] (0xc00158ed10) (0xc000113f40) Stream removed, broadcasting: 3 I0122 15:01:33.597080 9 log.go:172] (0xc00158ed10) (0xc00096ea00) Stream removed, broadcasting: 5 Jan 22 15:01:33.597: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:01:33.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3653" for this suite. 
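
The pod-network-test-3653 spec above checks pod-to-pod HTTP reachability with the test images' /dial helper: an aggregator pod is asked, over its own endpoint, to fetch /hostName from a target pod and report what came back. The kubectl-level shape of that probe, with the pod name and IPs from this run (10.44.0.2 aggregator, 10.32.0.4 target) standing in for whatever a real cluster would assign:

kubectl exec host-test-container-pod -- /bin/sh -c \
  "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'"
# A healthy reply is a small JSON body naming the responding pod,
# e.g. {"responses":["netserver-0"]}; an empty response list would
# indicate the target was unreachable.
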
Jan 22 15:01:57.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:01:57.821: INFO: namespace pod-network-test-3653 deletion completed in 24.209217071s • [SLOW TEST:65.328 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:01:57.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jan 22 15:01:58.177: INFO: Pod name pod-release: Found 0 pods out of 1 Jan 22 15:02:03.191: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:02:04.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5574" for this suite. 
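
The replication-controller-5574 spec above verifies that a ReplicationController releases a pod once the pod's labels stop matching its selector: the relabeled pod keeps running, orphaned, and the RC spawns a replacement to restore the replica count. A minimal sketch, with hypothetical image and container names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: app
        image: nginx
EOF
# Relabel the managed pod so it falls out of the selector:
POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" name=released --overwrite
# The old pod is now unmanaged, and the RC creates a fresh replacement:
kubectl get pods -l name=pod-release
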
Jan 22 15:02:10.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:02:10.385: INFO: namespace replication-controller-5574 deletion completed in 6.150730226s • [SLOW TEST:12.564 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:02:10.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 22 15:02:10.758: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"cf457a9b-c3fc-4ddf-bed4-bca5d1810b13", Controller:(*bool)(0xc00258a0c2), BlockOwnerDeletion:(*bool)(0xc00258a0c3)}} Jan 22 15:02:10.803: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3f2e36f6-4230-433b-9962-025574ef18c1", Controller:(*bool)(0xc00275734a), BlockOwnerDeletion:(*bool)(0xc00275734b)}} Jan 22 15:02:10.823: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a272b265-b6d4-4761-ac9d-f43764d9c3dc", Controller:(*bool)(0xc002c96292), BlockOwnerDeletion:(*bool)(0xc002c96293)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:02:16.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2862" for this suite. 
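
The gc-2862 spec above wires three pods into an ownership cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, as the OwnerReferences dumps show) and checks that the garbage collector still makes progress instead of deadlocking. Owner references can be attached after creation once the owner's UID is known; a sketch of one edge of such a cycle, assuming pods pod1 and pod3 already exist:

UID3=$(kubectl get pod pod3 -o jsonpath='{.metadata.uid}')
kubectl patch pod pod1 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod3\",\"uid\":\"$UID3\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
# Deleting any pod in the cycle should eventually cascade to the rest.
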
Jan 22 15:02:22.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:02:22.251: INFO: namespace gc-2862 deletion completed in 6.239507604s • [SLOW TEST:11.866 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:02:22.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-9a2d44b8-5e09-4856-a5f5-fb7acab226f5 in namespace container-probe-5495 Jan 22 15:02:32.510: INFO: Started pod liveness-9a2d44b8-5e09-4856-a5f5-fb7acab226f5 in namespace container-probe-5495 STEP: checking the pod's current state and verifying that restartCount is present Jan 22 15:02:32.516: INFO: Initial restart count of pod liveness-9a2d44b8-5e09-4856-a5f5-fb7acab226f5 is 0 Jan 22 15:02:46.609: INFO: Restart count of pod container-probe-5495/liveness-9a2d44b8-5e09-4856-a5f5-fb7acab226f5 is now 1 (14.092755343s elapsed) Jan 22 15:03:06.714: INFO: Restart count of pod container-probe-5495/liveness-9a2d44b8-5e09-4856-a5f5-fb7acab226f5 is now 2 (34.197108596s elapsed) Jan 22 15:03:26.828: INFO: Restart count of pod container-probe-5495/liveness-9a2d44b8-5e09-4856-a5f5-fb7acab226f5 is now 3 (54.311524307s elapsed) Jan 22 15:03:46.949: INFO: Restart count of pod container-probe-5495/liveness-9a2d44b8-5e09-4856-a5f5-fb7acab226f5 is now 4 (1m14.432689978s elapsed) Jan 22 15:04:49.279: INFO: Restart count of pod container-probe-5495/liveness-9a2d44b8-5e09-4856-a5f5-fb7acab226f5 is now 5 (2m16.762308353s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:04:49.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5495" for this suite. 
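
The container-probe-5495 spec above is the inverse of the earlier probe test: the probed file disappears after a delay, so the liveness probe starts failing and the kubelet restarts the container repeatedly, with restartCount only ever counting up (1 through 5 in the log, at growing back-off intervals). A minimal sketch of such a pod, with hypothetical names and timings:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-failing
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
# Each failed probe after the 30 s mark triggers a restart; the count
# never decreases, which is the property the spec asserts:
kubectl get pod liveness-exec-failing -o jsonpath='{.status.containerStatuses[0].restartCount}'
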
Jan 22 15:04:55.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:04:55.537: INFO: namespace container-probe-5495 deletion completed in 6.201476455s • [SLOW TEST:153.285 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:04:55.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:05:07.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7087" for this suite. 
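This case schedules a container whose command always exits non-zero and asserts that its status ends up with a populated state.terminated block. A minimal sketch of such a pod; all names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: fail
    image: busybox
    command: ["/bin/false"]        # always exits with status 1

Once it has run, kubectl get pod bin-false-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}' prints the terminated reason (Error for a non-zero exit).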
Jan 22 15:05:13.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:05:13.848: INFO: namespace kubelet-test-7087 deletion completed in 6.174041386s • [SLOW TEST:18.311 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:05:13.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9669.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9669.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9669.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9669.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9669.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9669.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 22 15:05:28.026: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9669/dns-test-46923863-45f0-41e3-ac69-d8c914083779: the server could not find the requested resource (get pods dns-test-46923863-45f0-41e3-ac69-d8c914083779) Jan 22 15:05:28.032: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9669/dns-test-46923863-45f0-41e3-ac69-d8c914083779: the server could not find the requested resource (get pods dns-test-46923863-45f0-41e3-ac69-d8c914083779) Jan 22 15:05:28.036: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-9669.svc.cluster.local from pod dns-9669/dns-test-46923863-45f0-41e3-ac69-d8c914083779: the server could not find the requested resource (get pods dns-test-46923863-45f0-41e3-ac69-d8c914083779) Jan 22 15:05:28.048: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-9669/dns-test-46923863-45f0-41e3-ac69-d8c914083779: the server could not find the requested resource (get pods dns-test-46923863-45f0-41e3-ac69-d8c914083779) Jan 22 15:05:28.053: INFO: Unable to read jessie_udp@PodARecord from pod dns-9669/dns-test-46923863-45f0-41e3-ac69-d8c914083779: the server could not find the requested resource (get pods dns-test-46923863-45f0-41e3-ac69-d8c914083779) Jan 22 15:05:28.063: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9669/dns-test-46923863-45f0-41e3-ac69-d8c914083779: the server could not find the requested resource (get pods dns-test-46923863-45f0-41e3-ac69-d8c914083779) Jan 22 15:05:28.063: INFO: Lookups using dns-9669/dns-test-46923863-45f0-41e3-ac69-d8c914083779 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-9669.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Jan 22 15:05:33.130: INFO: DNS probes using dns-9669/dns-test-46923863-45f0-41e3-ac69-d8c914083779 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:05:33.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9669" for this suite. 
Jan 22 15:05:39.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:05:39.440: INFO: namespace dns-9669 deletion completed in 6.180093457s • [SLOW TEST:25.589 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:05:39.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 22 15:05:50.205: INFO: Successfully updated pod "labelsupdate07b1b0f7-bade-473c-81de-52dae3f5274c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:05:52.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2081" for this suite. 
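The one interesting beat here is 'Successfully updated pod "labelsupdate..."': the pod's labels are mutated after creation, and the downward API volume file has to follow. A minimal sketch of the pattern; every name below is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: labels-demo                # hypothetical name
  labels:
    key: value1
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels

After kubectl label pod labels-demo key=value2 --overwrite, the kubelet rewrites /etc/podinfo/labels on its sync interval, and the test simply waits for the new value to appear in the container's output.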
Jan 22 15:06:14.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:06:14.523: INFO: namespace downward-api-2081 deletion completed in 22.178423999s • [SLOW TEST:35.083 seconds] [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:06:14.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Jan 22 15:06:26.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-f20fdd01-1b1e-4d50-af21-48143c9986f5 -c busybox-main-container --namespace=emptydir-1203 -- cat /usr/share/volumeshare/shareddata.txt' Jan 22 15:06:28.960: INFO: stderr: "I0122 15:06:28.678914 4703 log.go:172] (0xc0006d4370) (0xc0006da820) Create stream\nI0122 15:06:28.678974 4703 log.go:172] (0xc0006d4370) (0xc0006da820) Stream added, broadcasting: 1\nI0122 15:06:28.686756 4703 log.go:172] (0xc0006d4370) Reply frame received for 1\nI0122 15:06:28.686869 4703 log.go:172] (0xc0006d4370) (0xc00059a1e0) Create stream\nI0122 15:06:28.686885 4703 log.go:172] (0xc0006d4370) (0xc00059a1e0) Stream added, broadcasting: 3\nI0122 15:06:28.689977 4703 log.go:172] (0xc0006d4370) Reply frame received for 3\nI0122 15:06:28.690031 4703 log.go:172] (0xc0006d4370) (0xc0006da8c0) Create stream\nI0122 15:06:28.690041 4703 log.go:172] (0xc0006d4370) (0xc0006da8c0) Stream added, broadcasting: 5\nI0122 15:06:28.692418 4703 log.go:172] (0xc0006d4370) Reply frame received for 5\nI0122 15:06:28.796333 4703 log.go:172] (0xc0006d4370) Data frame received for 3\nI0122 15:06:28.796412 4703 log.go:172] (0xc00059a1e0) (3) Data frame handling\nI0122 15:06:28.796439 4703 log.go:172] (0xc00059a1e0) (3) Data frame sent\nI0122 15:06:28.952214 4703 log.go:172] (0xc0006d4370) Data frame received for 1\nI0122 15:06:28.952352 4703 log.go:172] (0xc0006da820) (1) Data frame handling\nI0122 15:06:28.952388 4703 log.go:172] (0xc0006da820) (1) Data frame sent\nI0122 15:06:28.952485 4703 log.go:172] (0xc0006d4370) (0xc0006da820) Stream removed, broadcasting: 1\nI0122 15:06:28.952579 4703 log.go:172] (0xc0006d4370) (0xc00059a1e0) Stream removed, broadcasting: 3\nI0122 15:06:28.952611 4703 log.go:172] (0xc0006d4370) (0xc0006da8c0) Stream removed, broadcasting: 5\nI0122 15:06:28.952709 4703 log.go:172] (0xc0006d4370) Go away received\nI0122 15:06:28.953030 4703 log.go:172] (0xc0006d4370) (0xc0006da820) Stream removed, broadcasting: 1\nI0122 15:06:28.953039
4703 log.go:172] (0xc0006d4370) (0xc00059a1e0) Stream removed, broadcasting: 3\nI0122 15:06:28.953047 4703 log.go:172] (0xc0006d4370) (0xc0006da8c0) Stream removed, broadcasting: 5\n" Jan 22 15:06:28.960: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:06:28.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1203" for this suite. Jan 22 15:06:35.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:06:35.169: INFO: namespace emptydir-1203 deletion completed in 6.192230856s • [SLOW TEST:20.644 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:06:35.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 22 15:06:35.274: INFO: Waiting up to 5m0s for pod "pod-47bb475f-9335-4d0a-bf86-65ac964f9ef6" in namespace "emptydir-8623" to be "success or failure" Jan 22 15:06:35.311: INFO: Pod "pod-47bb475f-9335-4d0a-bf86-65ac964f9ef6": Phase="Pending", Reason="", readiness=false. Elapsed: 37.094667ms Jan 22 15:06:37.319: INFO: Pod "pod-47bb475f-9335-4d0a-bf86-65ac964f9ef6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045359782s Jan 22 15:06:39.327: INFO: Pod "pod-47bb475f-9335-4d0a-bf86-65ac964f9ef6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053220068s Jan 22 15:06:41.341: INFO: Pod "pod-47bb475f-9335-4d0a-bf86-65ac964f9ef6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06728705s Jan 22 15:06:43.360: INFO: Pod "pod-47bb475f-9335-4d0a-bf86-65ac964f9ef6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085793938s Jan 22 15:06:45.373: INFO: Pod "pod-47bb475f-9335-4d0a-bf86-65ac964f9ef6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.098654206s Jan 22 15:06:47.386: INFO: Pod "pod-47bb475f-9335-4d0a-bf86-65ac964f9ef6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.112190588s STEP: Saw pod success Jan 22 15:06:47.386: INFO: Pod "pod-47bb475f-9335-4d0a-bf86-65ac964f9ef6" satisfied condition "success or failure" Jan 22 15:06:47.392: INFO: Trying to get logs from node iruya-node pod pod-47bb475f-9335-4d0a-bf86-65ac964f9ef6 container test-container: STEP: delete the pod Jan 22 15:06:47.629: INFO: Waiting for pod pod-47bb475f-9335-4d0a-bf86-65ac964f9ef6 to disappear Jan 22 15:06:47.726: INFO: Pod pod-47bb475f-9335-4d0a-bf86-65ac964f9ef6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:06:47.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8623" for this suite. Jan 22 15:06:53.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:06:53.949: INFO: namespace emptydir-8623 deletion completed in 6.208726222s • [SLOW TEST:18.779 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:06:53.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Jan 22 15:06:54.027: INFO: Waiting up to 5m0s for pod "client-containers-9d715052-5dd0-4e64-98de-f4f8219fd511" in namespace "containers-8266" to be "success or failure" Jan 22 15:06:54.037: INFO: Pod "client-containers-9d715052-5dd0-4e64-98de-f4f8219fd511": Phase="Pending", Reason="", readiness=false. Elapsed: 10.019331ms Jan 22 15:06:56.046: INFO: Pod "client-containers-9d715052-5dd0-4e64-98de-f4f8219fd511": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01879144s Jan 22 15:06:58.054: INFO: Pod "client-containers-9d715052-5dd0-4e64-98de-f4f8219fd511": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026552599s Jan 22 15:07:00.066: INFO: Pod "client-containers-9d715052-5dd0-4e64-98de-f4f8219fd511": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03825191s Jan 22 15:07:02.074: INFO: Pod "client-containers-9d715052-5dd0-4e64-98de-f4f8219fd511": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.047052872s STEP: Saw pod success Jan 22 15:07:02.074: INFO: Pod "client-containers-9d715052-5dd0-4e64-98de-f4f8219fd511" satisfied condition "success or failure" Jan 22 15:07:02.078: INFO: Trying to get logs from node iruya-node pod client-containers-9d715052-5dd0-4e64-98de-f4f8219fd511 container test-container: STEP: delete the pod Jan 22 15:07:02.136: INFO: Waiting for pod client-containers-9d715052-5dd0-4e64-98de-f4f8219fd511 to disappear Jan 22 15:07:02.140: INFO: Pod client-containers-9d715052-5dd0-4e64-98de-f4f8219fd511 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:07:02.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8266" for this suite. Jan 22 15:07:08.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:07:08.344: INFO: namespace containers-8266 deletion completed in 6.199456885s • [SLOW TEST:14.392 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:07:08.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:07:16.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2607" for this suite. 
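The "Cleaning up the secret/configmap/pod" steps point at what this test actually targets: secret and configMap volumes are materialized on the node inside a wrapping emptyDir, and mounting several of them in one pod must not collide. A minimal sketch of the same combination; the Secret and ConfigMap are assumed to already exist, and all names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo               # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-vol
    - name: cm-vol
      mountPath: /etc/cm-vol
  volumes:
  - name: secret-vol
    secret:
      secretName: demo-secret      # hypothetical existing Secret
  - name: cm-vol
    configMap:
      name: demo-config            # hypothetical existing ConfigMap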
Jan 22 15:07:22.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:07:22.792: INFO: namespace emptydir-wrapper-2607 deletion completed in 6.155448885s • [SLOW TEST:14.446 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:07:22.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 22 15:07:22.927: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 22 15:07:22.943: INFO: Number of nodes with available pods: 0 Jan 22 15:07:22.943: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jan 22 15:07:22.986: INFO: Number of nodes with available pods: 0 Jan 22 15:07:22.986: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:23.995: INFO: Number of nodes with available pods: 0 Jan 22 15:07:23.995: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:24.994: INFO: Number of nodes with available pods: 0 Jan 22 15:07:24.994: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:26.010: INFO: Number of nodes with available pods: 0 Jan 22 15:07:26.010: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:27.018: INFO: Number of nodes with available pods: 0 Jan 22 15:07:27.018: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:27.995: INFO: Number of nodes with available pods: 0 Jan 22 15:07:27.995: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:28.995: INFO: Number of nodes with available pods: 0 Jan 22 15:07:28.995: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:29.993: INFO: Number of nodes with available pods: 0 Jan 22 15:07:29.993: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:30.997: INFO: Number of nodes with available pods: 1 Jan 22 15:07:30.997: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 22 15:07:31.063: INFO: Number of nodes with available pods: 1 Jan 22 15:07:31.063: INFO: Number of running nodes: 0, number of available pods: 1 Jan 22 15:07:32.071: INFO: Number of nodes with available pods: 0 Jan 22 15:07:32.071: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 22 15:07:32.097: INFO: Number of nodes with available pods: 0 Jan 22 15:07:32.097: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:33.106: INFO: Number of nodes with available pods: 0 Jan 22 15:07:33.106: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:34.104: INFO: Number of nodes with available pods: 0 Jan 22 15:07:34.104: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:35.104: INFO: Number of nodes with available pods: 0 Jan 22 15:07:35.104: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:36.108: INFO: Number of nodes with available pods: 0 Jan 22 15:07:36.108: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:37.166: INFO: Number of nodes with available pods: 0 Jan 22 15:07:37.166: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:38.106: INFO: Number of nodes with available pods: 0 Jan 22 15:07:38.106: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:39.115: INFO: Number of nodes with available pods: 0 Jan 22 15:07:39.115: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:40.105: INFO: Number of nodes with available pods: 0 Jan 22 15:07:40.105: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:41.112: INFO: Number of nodes with available pods: 0 Jan 22 15:07:41.112: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:42.107: INFO: Number of nodes with available pods: 0 Jan 22 15:07:42.107: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:43.107: INFO: Number of nodes with available pods: 0 Jan 22 15:07:43.107: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:44.111: INFO: Number 
of nodes with available pods: 0 Jan 22 15:07:44.111: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:45.122: INFO: Number of nodes with available pods: 0 Jan 22 15:07:45.122: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:46.107: INFO: Number of nodes with available pods: 0 Jan 22 15:07:46.107: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:47.108: INFO: Number of nodes with available pods: 0 Jan 22 15:07:47.108: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:49.335: INFO: Number of nodes with available pods: 0 Jan 22 15:07:49.336: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:50.105: INFO: Number of nodes with available pods: 0 Jan 22 15:07:50.105: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:51.106: INFO: Number of nodes with available pods: 0 Jan 22 15:07:51.106: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:52.106: INFO: Number of nodes with available pods: 0 Jan 22 15:07:52.106: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:53.103: INFO: Number of nodes with available pods: 0 Jan 22 15:07:53.103: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:54.104: INFO: Number of nodes with available pods: 0 Jan 22 15:07:54.104: INFO: Node iruya-node is running more than one daemon pod Jan 22 15:07:55.106: INFO: Number of nodes with available pods: 1 Jan 22 15:07:55.106: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6530, will wait for the garbage collector to delete the pods Jan 22 15:07:55.204: INFO: Deleting DaemonSet.extensions daemon-set took: 12.727166ms Jan 22 15:07:55.505: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.504658ms Jan 22 15:08:06.622: INFO: Number of nodes with available pods: 0 Jan 22 15:08:06.623: INFO: Number of running nodes: 0, number of available pods: 0 Jan 22 15:08:06.627: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6530/daemonsets","resourceVersion":"21449935"},"items":null} Jan 22 15:08:06.632: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6530/pods","resourceVersion":"21449935"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:08:06.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6530" for this suite. 
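The whole flow above is driven by node labels: the DaemonSet carries a nodeSelector, the node is labeled blue (a pod appears), relabeled green (the pod is evicted), and the selector itself is then updated to green under a RollingUpdate strategy. A minimal sketch of such a DaemonSet; the label key is an assumption, since the log only shows the values blue and green:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate            # the strategy the test switches to mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue                # hypothetical key; flipped to green later in the test
      containers:
      - name: app
        image: busybox             # hypothetical image
        command: ["sleep", "3600"]

kubectl label node iruya-node color=blue --overwrite makes the node eligible (the available-pod count above goes 0 to 1); relabeling it green evicts the pod again until the selector is updated to match.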
Jan 22 15:08:12.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:08:12.843: INFO: namespace daemonsets-6530 deletion completed in 6.154944799s • [SLOW TEST:50.050 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:08:12.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0122 15:08:23.736298 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 22 15:08:23.736: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:08:23.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5820" for this suite. 
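"Wait for all pods to be garbage collected" works because every pod the rc creates carries an ownerReference back to it; deleting the rc without an Orphan propagation policy lets the collector remove the dependents. Roughly what that metadata looks like on each pod, as a fragment; the rc name is hypothetical and the UID is a placeholder, since the log prints neither:

metadata:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc                          # hypothetical name
    uid: 00000000-0000-0000-0000-000000000000    # placeholder; must match the rc's UID
    controller: true
    blockOwnerDeletion: true

The metrics-grabber warning that follows is environmental noise in this cluster (no registered master node), not a test failure.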
Jan 22 15:08:29.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:08:29.915: INFO: namespace gc-5820 deletion completed in 6.168665641s • [SLOW TEST:17.072 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:08:29.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Jan 22 15:08:30.083: INFO: Waiting up to 5m0s for pod "var-expansion-83a37664-9b4d-47f0-be35-ac565f7e24f2" in namespace "var-expansion-4014" to be "success or failure" Jan 22 15:08:30.100: INFO: Pod "var-expansion-83a37664-9b4d-47f0-be35-ac565f7e24f2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.728823ms Jan 22 15:08:32.107: INFO: Pod "var-expansion-83a37664-9b4d-47f0-be35-ac565f7e24f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024093655s Jan 22 15:08:34.122: INFO: Pod "var-expansion-83a37664-9b4d-47f0-be35-ac565f7e24f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039260026s Jan 22 15:08:36.131: INFO: Pod "var-expansion-83a37664-9b4d-47f0-be35-ac565f7e24f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047393276s Jan 22 15:08:38.138: INFO: Pod "var-expansion-83a37664-9b4d-47f0-be35-ac565f7e24f2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054938033s Jan 22 15:08:40.147: INFO: Pod "var-expansion-83a37664-9b4d-47f0-be35-ac565f7e24f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063681414s STEP: Saw pod success Jan 22 15:08:40.147: INFO: Pod "var-expansion-83a37664-9b4d-47f0-be35-ac565f7e24f2" satisfied condition "success or failure" Jan 22 15:08:40.150: INFO: Trying to get logs from node iruya-node pod var-expansion-83a37664-9b4d-47f0-be35-ac565f7e24f2 container dapi-container: STEP: delete the pod Jan 22 15:08:40.278: INFO: Waiting for pod var-expansion-83a37664-9b4d-47f0-be35-ac565f7e24f2 to disappear Jan 22 15:08:40.306: INFO: Pod var-expansion-83a37664-9b4d-47f0-be35-ac565f7e24f2 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:08:40.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4014" for this suite. 
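The substitution being tested is done by the kubelet, not by a shell: $(VAR) references in command and args are expanded from the container's env before exec, so no shell needs to be present in the image. A minimal sketch; names and values are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container           # same container name the test logs above
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from the env"
    command: ["echo"]
    args: ["$(MESSAGE)"]           # expanded by the kubelet to the env value

To emit a literal $(MESSAGE) instead, escape it as $$(MESSAGE).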
Jan 22 15:08:46.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:08:46.461: INFO: namespace var-expansion-4014 deletion completed in 6.145391148s • [SLOW TEST:16.545 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:08:46.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jan 22 15:08:53.191: INFO: 0 pods remaining Jan 22 15:08:53.191: INFO: 0 pods has nil DeletionTimestamp Jan 22 15:08:53.191: INFO: STEP: Gathering metrics W0122 15:08:54.340479 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 22 15:08:54.340: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:08:54.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7527" for this suite. 
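The rc outliving its delete call until "0 pods remaining" is foreground cascading deletion: the request carries a propagation policy, the API server parks the rc behind the foregroundDeletion finalizer, and only removes it once its dependents are gone. A sketch of the delete options such a request sends:

apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground

With Orphan instead, the rc would disappear immediately and its pods would be left behind with their ownerReferences cleared.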
Jan 22 15:09:06.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:09:06.509: INFO: namespace gc-7527 deletion completed in 12.153184062s • [SLOW TEST:20.047 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:09:06.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 22 15:09:15.227: INFO: Successfully updated pod "annotationupdate3dc4e1e5-6261-4d4d-b5d5-eca30599fd4c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:09:17.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8055" for this suite. 
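Same shape as the labels test earlier, but via a projected volume, which can combine downwardAPI items with secret and configMap sources under one mount point. A minimal sketch using annotations instead of labels; all names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: annotations-demo           # hypothetical name
  annotations:
    build: one
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations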
Jan 22 15:09:39.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:09:39.543: INFO: namespace projected-8055 deletion completed in 22.153312516s • [SLOW TEST:33.034 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:09:39.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 22 15:09:39.630: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 22 15:09:49.677: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 22 15:09:57.767: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2321,SelfLink:/apis/apps/v1/namespaces/deployment-2321/deployments/test-cleanup-deployment,UID:d49480d5-696b-430a-a5ab-78cbd6a1ee35,ResourceVersion:21450332,Generation:1,CreationTimestamp:2020-01-22 15:09:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-22 15:09:49 +0000 UTC 2020-01-22 15:09:49 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-22 15:09:57 +0000 UTC 2020-01-22 15:09:49 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 22 15:09:57.774: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2321,SelfLink:/apis/apps/v1/namespaces/deployment-2321/replicasets/test-cleanup-deployment-55bbcbc84c,UID:5284fbb9-5941-4c6c-98a8-f5f15d88b8f8,ResourceVersion:21450321,Generation:1,CreationTimestamp:2020-01-22 15:09:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment d49480d5-696b-430a-a5ab-78cbd6a1ee35 0xc000b45a77 0xc000b45a78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 22 15:09:57.787: INFO: Pod "test-cleanup-deployment-55bbcbc84c-zcr97" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-zcr97,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2321,SelfLink:/api/v1/namespaces/deployment-2321/pods/test-cleanup-deployment-55bbcbc84c-zcr97,UID:47282453-09d3-4da0-9d89-d9b70bb4df7e,ResourceVersion:21450320,Generation:0,CreationTimestamp:2020-01-22 15:09:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 5284fbb9-5941-4c6c-98a8-f5f15d88b8f8 0xc001cc1297 0xc001cc1298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hnv8b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hnv8b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-hnv8b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cc1330} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cc1350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 15:09:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 15:09:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 
UTC 2020-01-22 15:09:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 15:09:49 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-22 15:09:49 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-22 15:09:57 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://099381e884dd1eca07e0f8269ec676aced9fc3239f05a44cda94d10d8853430c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:09:57.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2321" for this suite. Jan 22 15:10:03.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 15:10:03.956: INFO: namespace deployment-2321 deletion completed in 6.156731548s • [SLOW TEST:24.413 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 22 15:10:03.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-920.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-920.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 22 15:10:20.242: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-920/dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603: the server could not find the requested resource (get pods dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603) Jan 22 15:10:20.261: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-920/dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603: the server could not find the requested resource (get pods dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603) Jan 22 15:10:20.268: INFO: Unable to read wheezy_udp@PodARecord from pod dns-920/dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603: the server could not find the requested resource (get pods dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603) Jan 22 15:10:20.274: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-920/dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603: the server could not find the requested resource (get pods dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603) Jan 22 15:10:20.279: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-920/dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603: the server could not find the requested resource (get pods dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603) Jan 22 15:10:20.285: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-920/dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603: the server could not find the requested resource (get pods dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603) Jan 22 15:10:20.296: INFO: Unable to read jessie_udp@PodARecord from pod dns-920/dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603: the server could not find the requested resource (get pods dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603) Jan 22 15:10:20.302: INFO: Unable to read jessie_tcp@PodARecord from pod dns-920/dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603: the server could not find the requested resource (get pods dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603) Jan 22 15:10:20.302: INFO: Lookups using dns-920/dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord] Jan 22 15:10:25.385: INFO: DNS probes using dns-920/dns-test-55ba24a7-fba2-40e4-afe3-8a8fc92a5603 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:10:25.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-920" for this suite. 
Jan 22 15:10:31.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 15:10:31.765: INFO: namespace dns-920 deletion completed in 6.220216976s

• [SLOW TEST:27.809 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 15:10:31.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4322
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 22 15:10:32.220: INFO: Found 0 stateful pods, waiting for 3
Jan 22 15:10:42.716: INFO: Found 2 stateful pods, waiting for 3
Jan 22 15:10:52.228: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 15:10:52.228: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 15:10:52.228: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 22 15:11:02.248: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 15:11:02.248: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 15:11:02.248: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 22 15:11:02.300: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 22 15:11:12.679: INFO: Updating stateful set ss2
Jan 22 15:11:12.758: INFO: Waiting for Pod statefulset-4322/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 22 15:11:22.781: INFO: Waiting for Pod statefulset-4322/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 22 15:11:33.167: INFO: Found 2 stateful pods, waiting for 3
Jan 22 15:11:43.733: INFO: Found 2 stateful pods, waiting for 3
Jan 22 15:11:53.180: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 15:11:53.181: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 15:11:53.181: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 22 15:11:53.216: INFO: Updating stateful set ss2
Jan 22 15:11:53.266: INFO: Waiting for Pod statefulset-4322/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 22 15:12:03.918: INFO: Updating stateful set ss2
Jan 22 15:12:04.015: INFO: Waiting for StatefulSet statefulset-4322/ss2 to complete update
Jan 22 15:12:04.015: INFO: Waiting for Pod statefulset-4322/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 22 15:12:14.030: INFO: Waiting for StatefulSet statefulset-4322/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 22 15:12:24.027: INFO: Deleting all statefulset in ns statefulset-4322
Jan 22 15:12:24.031: INFO: Scaling statefulset ss2 to 0
Jan 22 15:13:04.071: INFO: Waiting for statefulset status.replicas updated to 0
Jan 22 15:13:04.076: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 15:13:04.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4322" for this suite.
Jan 22 15:13:12.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 15:13:12.347: INFO: namespace statefulset-4322 deletion completed in 8.173401956s

• [SLOW TEST:160.581 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
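The canary and phased behavior exercised above is driven by the StatefulSet's RollingUpdate partition: only Pods with an ordinal at or above the partition move to the new revision, which is why ss2-2 updates first while ss2-0 keeps the old revision. A minimal sketch of such a spec follows (field values are illustrative; the suite builds its own object):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2                              # mirrors the set name in the log
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                       # canary: only ordinal 2 is updated
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # the target image from the log

Lowering the partition step by step (2, then 1, then 0) rolls the remaining ordinals in order, which matches the phased update the test performs.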
"downwardapi-volume-45400c27-fa02-4ea5-9241-c3c04756969a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.225025ms Jan 22 15:13:14.515: INFO: Pod "downwardapi-volume-45400c27-fa02-4ea5-9241-c3c04756969a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050754059s Jan 22 15:13:16.533: INFO: Pod "downwardapi-volume-45400c27-fa02-4ea5-9241-c3c04756969a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069204347s Jan 22 15:13:18.553: INFO: Pod "downwardapi-volume-45400c27-fa02-4ea5-9241-c3c04756969a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088220905s Jan 22 15:13:20.639: INFO: Pod "downwardapi-volume-45400c27-fa02-4ea5-9241-c3c04756969a": Phase="Running", Reason="", readiness=true. Elapsed: 8.17436187s Jan 22 15:13:22.655: INFO: Pod "downwardapi-volume-45400c27-fa02-4ea5-9241-c3c04756969a": Phase="Running", Reason="", readiness=true. Elapsed: 10.190246572s Jan 22 15:13:24.670: INFO: Pod "downwardapi-volume-45400c27-fa02-4ea5-9241-c3c04756969a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.206198137s STEP: Saw pod success Jan 22 15:13:24.671: INFO: Pod "downwardapi-volume-45400c27-fa02-4ea5-9241-c3c04756969a" satisfied condition "success or failure" Jan 22 15:13:24.676: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-45400c27-fa02-4ea5-9241-c3c04756969a container client-container: STEP: delete the pod Jan 22 15:13:24.736: INFO: Waiting for pod downwardapi-volume-45400c27-fa02-4ea5-9241-c3c04756969a to disappear Jan 22 15:13:24.741: INFO: Pod downwardapi-volume-45400c27-fa02-4ea5-9241-c3c04756969a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 22 15:13:24.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1066" for this suite. 
------------------------------
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 15:13:30.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 15:14:01.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2516" for this suite.
Jan 22 15:14:07.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 15:14:07.421: INFO: namespace namespaces-2516 deletion completed in 6.14686372s
STEP: Destroying namespace "nsdeletetest-8816" for this suite.
Jan 22 15:14:07.424: INFO: Namespace nsdeletetest-8816 was already deleted
STEP: Destroying namespace "nsdeletetest-7087" for this suite.
Jan 22 15:14:13.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 15:14:13.613: INFO: namespace nsdeletetest-7087 deletion completed in 6.189444036s

• [SLOW TEST:42.699 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 15:14:13.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-25347ec6-d05b-4813-ab33-ce2211141ff8
STEP: Creating a pod to test consume configMaps
Jan 22 15:14:13.710: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-43d4b408-9f3b-4347-9358-df86fed91e67" in namespace "projected-3787" to be "success or failure"
Jan 22 15:14:13.747: INFO: Pod "pod-projected-configmaps-43d4b408-9f3b-4347-9358-df86fed91e67": Phase="Pending", Reason="", readiness=false. Elapsed: 36.936367ms
Jan 22 15:14:15.788: INFO: Pod "pod-projected-configmaps-43d4b408-9f3b-4347-9358-df86fed91e67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077887222s
Jan 22 15:14:17.809: INFO: Pod "pod-projected-configmaps-43d4b408-9f3b-4347-9358-df86fed91e67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098525747s
Jan 22 15:14:19.828: INFO: Pod "pod-projected-configmaps-43d4b408-9f3b-4347-9358-df86fed91e67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117878984s
Jan 22 15:14:21.836: INFO: Pod "pod-projected-configmaps-43d4b408-9f3b-4347-9358-df86fed91e67": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126161337s
Jan 22 15:14:23.853: INFO: Pod "pod-projected-configmaps-43d4b408-9f3b-4347-9358-df86fed91e67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.143304111s
STEP: Saw pod success
Jan 22 15:14:23.854: INFO: Pod "pod-projected-configmaps-43d4b408-9f3b-4347-9358-df86fed91e67" satisfied condition "success or failure"
Jan 22 15:14:23.877: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-43d4b408-9f3b-4347-9358-df86fed91e67 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 22 15:14:23.996: INFO: Waiting for pod pod-projected-configmaps-43d4b408-9f3b-4347-9358-df86fed91e67 to disappear
Jan 22 15:14:24.025: INFO: Pod pod-projected-configmaps-43d4b408-9f3b-4347-9358-df86fed91e67 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 15:14:24.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3787" for this suite.
Jan 22 15:14:30.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 15:14:30.238: INFO: namespace projected-3787 deletion completed in 6.175517424s

• [SLOW TEST:16.624 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
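What defaultMode does here: every file projected into the volume receives the given permission bits unless an individual item overrides them, and the test reads the mode back from inside the container. A minimal sketch (names are illustrative; the referenced ConfigMap must exist beforehand, as in the test's first STEP):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.31                  # assumed image; lists the mounted files and modes
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400                  # the mode asserted on the mounted files
      sources:
      - configMap:
          name: projected-configmap-test-volume   # illustrative name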
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 15:14:30.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 22 15:14:30.336: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 19.561146ms)
Jan 22 15:14:30.345: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.623777ms)
Jan 22 15:14:30.359: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 13.576055ms)
Jan 22 15:14:30.376: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 16.823704ms)
Jan 22 15:14:30.381: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.9574ms)
Jan 22 15:14:30.385: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.625563ms)
Jan 22 15:14:30.391: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.999321ms)
Jan 22 15:14:30.397: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.511892ms)
Jan 22 15:14:30.405: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.704549ms)
Jan 22 15:14:30.411: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.571242ms)
Jan 22 15:14:30.420: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.776015ms)
Jan 22 15:14:30.427: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.641945ms)
Jan 22 15:14:30.432: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.775732ms)
Jan 22 15:14:30.440: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.510635ms)
Jan 22 15:14:30.447: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.9707ms)
Jan 22 15:14:30.457: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.878697ms)
Jan 22 15:14:30.466: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.250764ms)
Jan 22 15:14:30.473: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.978136ms)
Jan 22 15:14:30.479: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.829112ms)
Jan 22 15:14:30.496: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 16.512085ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 15:14:30.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3950" for this suite.
Jan 22 15:14:36.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 15:14:36.710: INFO: namespace proxy-3950 deletion completed in 6.208350711s

• [SLOW TEST:6.471 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 15:14:36.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 22 15:14:36.900: INFO: Waiting up to 5m0s for pod "downwardapi-volume-83eed67c-2f14-4d6e-bb4f-ed60d05241dd" in namespace "downward-api-8968" to be "success or failure"
Jan 22 15:14:37.013: INFO: Pod "downwardapi-volume-83eed67c-2f14-4d6e-bb4f-ed60d05241dd": Phase="Pending", Reason="", readiness=false. Elapsed: 113.040961ms
Jan 22 15:14:39.023: INFO: Pod "downwardapi-volume-83eed67c-2f14-4d6e-bb4f-ed60d05241dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123359385s
Jan 22 15:14:41.043: INFO: Pod "downwardapi-volume-83eed67c-2f14-4d6e-bb4f-ed60d05241dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142835592s
Jan 22 15:14:43.052: INFO: Pod "downwardapi-volume-83eed67c-2f14-4d6e-bb4f-ed60d05241dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151922695s
Jan 22 15:14:45.061: INFO: Pod "downwardapi-volume-83eed67c-2f14-4d6e-bb4f-ed60d05241dd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.161300291s
Jan 22 15:14:47.079: INFO: Pod "downwardapi-volume-83eed67c-2f14-4d6e-bb4f-ed60d05241dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.17941347s
STEP: Saw pod success
Jan 22 15:14:47.080: INFO: Pod "downwardapi-volume-83eed67c-2f14-4d6e-bb4f-ed60d05241dd" satisfied condition "success or failure"
Jan 22 15:14:47.088: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-83eed67c-2f14-4d6e-bb4f-ed60d05241dd container client-container: 
STEP: delete the pod
Jan 22 15:14:47.162: INFO: Waiting for pod downwardapi-volume-83eed67c-2f14-4d6e-bb4f-ed60d05241dd to disappear
Jan 22 15:14:47.174: INFO: Pod downwardapi-volume-83eed67c-2f14-4d6e-bb4f-ed60d05241dd no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 15:14:47.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8968" for this suite.
Jan 22 15:14:53.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 15:14:53.348: INFO: namespace downward-api-8968 deletion completed in 6.168940112s

• [SLOW TEST:16.637 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
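Unlike the volume-wide defaultMode case earlier, this test sets a per-item mode on a single downwardAPI file. A minimal sketch of the pattern (names illustrative, busybox standing in for the suite's test image):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31                  # assumed image; prints the file's mode
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400                       # per-item mode the test verifies
        fieldRef:
          fieldPath: metadata.name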
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 15:14:53.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 22 15:14:53.489: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 13.897414ms)
Jan 22 15:14:53.497: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.34196ms)
Jan 22 15:14:53.502: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.152148ms)
Jan 22 15:14:53.506: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.138706ms)
Jan 22 15:14:53.513: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.921283ms)
Jan 22 15:14:53.520: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.138226ms)
Jan 22 15:14:53.528: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.460032ms)
Jan 22 15:14:53.533: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.667557ms)
Jan 22 15:14:53.539: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.429777ms)
Jan 22 15:14:53.543: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.177761ms)
Jan 22 15:14:53.547: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 3.912384ms)
Jan 22 15:14:53.552: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.458789ms)
Jan 22 15:14:53.557: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.904656ms)
Jan 22 15:14:53.594: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 36.17002ms)
Jan 22 15:14:53.599: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.392455ms)
Jan 22 15:14:53.604: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.602825ms)
Jan 22 15:14:53.609: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.603375ms)
Jan 22 15:14:53.614: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.702715ms)
Jan 22 15:14:53.619: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.086519ms)
Jan 22 15:14:53.623: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.27293ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 15:14:53.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3565" for this suite.
Jan 22 15:14:59.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 15:14:59.802: INFO: namespace proxy-3565 deletion completed in 6.174737741s

• [SLOW TEST:6.453 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 15:14:59.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-8574/configmap-test-35a0428f-d7da-43c8-baa8-306ecc1f4726
STEP: Creating a pod to test consume configMaps
Jan 22 15:14:59.937: INFO: Waiting up to 5m0s for pod "pod-configmaps-fe5e0ae3-de53-4cb5-b8ea-7809e7dc6379" in namespace "configmap-8574" to be "success or failure"
Jan 22 15:14:59.956: INFO: Pod "pod-configmaps-fe5e0ae3-de53-4cb5-b8ea-7809e7dc6379": Phase="Pending", Reason="", readiness=false. Elapsed: 18.503682ms
Jan 22 15:15:01.967: INFO: Pod "pod-configmaps-fe5e0ae3-de53-4cb5-b8ea-7809e7dc6379": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030127018s
Jan 22 15:15:03.980: INFO: Pod "pod-configmaps-fe5e0ae3-de53-4cb5-b8ea-7809e7dc6379": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042424918s
Jan 22 15:15:05.986: INFO: Pod "pod-configmaps-fe5e0ae3-de53-4cb5-b8ea-7809e7dc6379": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049180933s
Jan 22 15:15:07.992: INFO: Pod "pod-configmaps-fe5e0ae3-de53-4cb5-b8ea-7809e7dc6379": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055208567s
Jan 22 15:15:10.000: INFO: Pod "pod-configmaps-fe5e0ae3-de53-4cb5-b8ea-7809e7dc6379": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063090709s
STEP: Saw pod success
Jan 22 15:15:10.000: INFO: Pod "pod-configmaps-fe5e0ae3-de53-4cb5-b8ea-7809e7dc6379" satisfied condition "success or failure"
Jan 22 15:15:10.005: INFO: Trying to get logs from node iruya-node pod pod-configmaps-fe5e0ae3-de53-4cb5-b8ea-7809e7dc6379 container env-test: 
STEP: delete the pod
Jan 22 15:15:10.079: INFO: Waiting for pod pod-configmaps-fe5e0ae3-de53-4cb5-b8ea-7809e7dc6379 to disappear
Jan 22 15:15:10.088: INFO: Pod pod-configmaps-fe5e0ae3-de53-4cb5-b8ea-7809e7dc6379 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 15:15:10.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8574" for this suite.
Jan 22 15:15:16.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 15:15:16.663: INFO: namespace configmap-8574 deletion completed in 6.553070436s

• [SLOW TEST:16.860 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
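The pattern under test: a ConfigMap key surfaced as a container environment variable via configMapKeyRef. A minimal sketch follows (the ConfigMap name, key, and variable name are illustrative, not the suite's generated objects):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.31                  # assumed image; dumps the environment
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA                  # illustrative variable name
      valueFrom:
        configMapKeyRef:
          name: configmap-test           # illustrative; must exist in the namespace
          key: data-1                    # illustrative key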
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 22 15:15:16.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 22 15:15:26.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1844" for this suite.
Jan 22 15:16:18.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 15:16:19.142: INFO: namespace kubelet-test-1844 deletion completed in 52.200339593s

• [SLOW TEST:62.479 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
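The behavior asserted here comes from the container securityContext: with readOnlyRootFilesystem set, any write to the root filesystem fails, which the test observes from inside a busybox container. A minimal sketch (name illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.31
    command: ["sh", "-c", "touch /should-fail"]   # expected to fail: rootfs is read-only
    securityContext:
      readOnlyRootFilesystem: true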
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
Jan 22 15:16:19.143: INFO: Running AfterSuite actions on all nodes
Jan 22 15:16:19.143: INFO: Running AfterSuite actions on node 1
Jan 22 15:16:19.143: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8366.628 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS