I0515 23:38:27.441504 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0515 23:38:27.441805 7 e2e.go:129] Starting e2e run "499df6d0-68de-42c3-ab3a-1c8bd4fa8149" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589585906 - Will randomize all specs
Will run 288 of 5095 specs

May 15 23:38:27.509: INFO: >>> kubeConfig: /root/.kube/config
May 15 23:38:27.513: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 15 23:38:27.534: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 15 23:38:27.577: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 15 23:38:27.577: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 15 23:38:27.577: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 15 23:38:27.588: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 15 23:38:27.588: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 15 23:38:27.588: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
May 15 23:38:27.589: INFO: kube-apiserver version: v1.18.2
May 15 23:38:27.589: INFO: >>> kubeConfig: /root/.kube/config
May 15 23:38:27.594: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 23:38:27.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
May 15 23:38:27.689: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-81a3d50b-1d81-49ca-a0ab-397a42ceb502
STEP: Creating a pod to test consume secrets
May 15 23:38:27.702: INFO: Waiting up to 5m0s for pod "pod-secrets-9fc50d0b-7746-42d8-bc41-2d55e2637b9b" in namespace "secrets-8406" to be "Succeeded or Failed"
May 15 23:38:27.718: INFO: Pod "pod-secrets-9fc50d0b-7746-42d8-bc41-2d55e2637b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.48606ms
May 15 23:38:29.769: INFO: Pod "pod-secrets-9fc50d0b-7746-42d8-bc41-2d55e2637b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067024656s
May 15 23:38:31.829: INFO: Pod "pod-secrets-9fc50d0b-7746-42d8-bc41-2d55e2637b9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127244447s
STEP: Saw pod success
May 15 23:38:31.830: INFO: Pod "pod-secrets-9fc50d0b-7746-42d8-bc41-2d55e2637b9b" satisfied condition "Succeeded or Failed"
May 15 23:38:31.833: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-9fc50d0b-7746-42d8-bc41-2d55e2637b9b container secret-volume-test:
STEP: delete the pod
May 15 23:38:32.055: INFO: Waiting for pod pod-secrets-9fc50d0b-7746-42d8-bc41-2d55e2637b9b to disappear
May 15 23:38:32.060: INFO: Pod pod-secrets-9fc50d0b-7746-42d8-bc41-2d55e2637b9b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 23:38:32.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8406" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":1,"skipped":18,"failed":0}
SSSSSSSSSSSSSSSS
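
The spec above checks that secret volume files get the requested file mode and group ownership when the pod runs as non-root. A minimal sketch of an equivalent pod; all names and numeric values here are assumed for illustration and are not taken from this run:

    kubectl create secret generic test-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-mode-check
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000   # non-root UID (assumed)
        fsGroup: 2000     # group applied to volume files (assumed)
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -ln /etc/secret-volume"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: test-secret
          defaultMode: 0440   # mode applied to each projected key
    EOF

With a spec like this, the mounted file should show up as mode -r--r----- with group 2000 in the container's ls output, which is roughly the assertion the e2e container makes.
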
------------------------------
[sig-cli] Kubectl client Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 23:38:32.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393
STEP: creating a pod
May 15 23:38:32.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-3502 -- logs-generator --log-lines-total 100 --run-duration 20s'
May 15 23:38:34.715: INFO: stderr: ""
May 15 23:38:34.715: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Waiting for log generator to start.
May 15 23:38:34.715: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
May 15 23:38:34.715: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3502" to be "running and ready, or succeeded"
May 15 23:38:34.769: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 53.931526ms
May 15 23:38:36.774: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059120067s
May 15 23:38:38.779: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.064049584s
May 15 23:38:38.779: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
May 15 23:38:38.779: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
May 15 23:38:38.779: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3502'
May 15 23:38:38.912: INFO: stderr: ""
May 15 23:38:38.912: INFO: stdout: "I0515 23:38:37.220015 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/jb4s 503\nI0515 23:38:37.420064 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/dngr 420\nI0515 23:38:37.620161 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/qszj 340\nI0515 23:38:37.820208 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/47g 335\nI0515 23:38:38.020205 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/j25b 458\nI0515 23:38:38.220161 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/prc9 242\nI0515 23:38:38.420167 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/9sx2 461\nI0515 23:38:38.620263 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/tr7 485\nI0515 23:38:38.820195 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/ntj 252\n"
STEP: limiting log lines
May 15 23:38:38.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3502 --tail=1'
May 15 23:38:39.034: INFO: stderr: ""
May 15 23:38:39.034: INFO: stdout: "I0515 23:38:39.020156 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/lwx 235\n"
May 15 23:38:39.034: INFO: got output "I0515 23:38:39.020156 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/lwx 235\n"
STEP: limiting log bytes
May 15 23:38:39.034: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3502 --limit-bytes=1'
May 15 23:38:39.157: INFO: stderr: ""
May 15 23:38:39.157: INFO: stdout: "I"
May 15 23:38:39.157: INFO: got output "I"
STEP: exposing timestamps
May 15 23:38:39.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3502 --tail=1 --timestamps'
May 15 23:38:39.273: INFO: stderr: ""
May 15 23:38:39.273: INFO: stdout: "2020-05-15T23:38:39.220288127Z I0515 23:38:39.220149 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/n6lb 375\n"
May 15 23:38:39.273: INFO: got output "2020-05-15T23:38:39.220288127Z I0515 23:38:39.220149 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/n6lb 375\n"
STEP: restricting to a time range
May 15 23:38:41.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3502 --since=1s'
May 15 23:38:41.878: INFO: stderr: ""
May 15 23:38:41.878: INFO: stdout: "I0515 23:38:41.020215 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/fbs 526\nI0515 23:38:41.220185 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/l5cg 207\nI0515 23:38:41.420168 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/vdng 320\nI0515 23:38:41.620132 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/nwl 286\nI0515 23:38:41.820198 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/swz 202\n"
May 15 23:38:41.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3502 --since=24h'
May 15 23:38:41.996: INFO: stderr: ""
May 15 23:38:41.996: INFO: stdout: "I0515 23:38:37.220015 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/jb4s 503\nI0515 23:38:37.420064 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/dngr 420\nI0515 23:38:37.620161 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/qszj 340\nI0515 23:38:37.820208 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/47g 335\nI0515 23:38:38.020205 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/j25b 458\nI0515 23:38:38.220161 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/prc9 242\nI0515 23:38:38.420167 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/9sx2 461\nI0515 23:38:38.620263 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/tr7 485\nI0515 23:38:38.820195 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/ntj 252\nI0515 23:38:39.020156 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/lwx 235\nI0515 23:38:39.220149 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/n6lb 375\nI0515 23:38:39.420252 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/998n 579\nI0515 23:38:39.620221 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/bzrn 342\nI0515 23:38:39.820196 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/tgl 519\nI0515 23:38:40.020205 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/ft8z 560\nI0515 23:38:40.220174 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/tmh 313\nI0515 23:38:40.420201 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/dl4s 284\nI0515 23:38:40.620173 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/px2p 231\nI0515 23:38:40.820206 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/bfpr 291\nI0515 23:38:41.020215 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/fbs 526\nI0515 23:38:41.220185 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/l5cg 207\nI0515 23:38:41.420168 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/vdng 320\nI0515 23:38:41.620132 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/nwl 286\nI0515 23:38:41.820198 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/swz 202\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
May 15 23:38:41.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3502'
May 15 23:38:54.851: INFO: stderr: ""
May 15 23:38:54.851: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 23:38:54.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3502" for this suite.
• [SLOW TEST:22.790 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":2,"skipped":34,"failed":0}
SSSSSSSSSSSSSSSSSS
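
Stripped of the e2e harness, the filtering this spec exercises maps onto plain kubectl invocations (pod, container, and namespace names taken from the run above):

    kubectl logs logs-generator logs-generator -n kubectl-3502                       # full container log
    kubectl logs logs-generator logs-generator -n kubectl-3502 --tail=1              # only the last line
    kubectl logs logs-generator logs-generator -n kubectl-3502 --limit-bytes=1       # only the first byte
    kubectl logs logs-generator logs-generator -n kubectl-3502 --tail=1 --timestamps # prefix RFC3339 timestamps
    kubectl logs logs-generator logs-generator -n kubectl-3502 --since=1s            # entries from the last second only

The second name after the pod is the container name. Note that --since=24h at the end of the spec returns the whole log again, which is how the test distinguishes the time-range filter from the line and byte limits.
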
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":3,"skipped":52,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:38:54.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9038 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-9038 I0515 23:38:55.177055 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9038, replica count: 2 I0515 23:38:58.227645 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:39:01.227925 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 23:39:01.227: INFO: Creating new exec pod May 15 23:39:06.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9038 execpodpjzfp -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 15 23:39:06.501: INFO: stderr: "I0515 23:39:06.391821 205 log.go:172] (0xc000959340) (0xc0009b2280) Create stream\nI0515 23:39:06.391887 205 log.go:172] (0xc000959340) (0xc0009b2280) Stream added, broadcasting: 1\nI0515 23:39:06.394726 205 log.go:172] (0xc000959340) Reply frame received for 1\nI0515 23:39:06.394770 205 log.go:172] (0xc000959340) (0xc000830960) Create stream\nI0515 23:39:06.394783 205 log.go:172] (0xc000959340) (0xc000830960) Stream added, broadcasting: 3\nI0515 23:39:06.395538 205 log.go:172] (0xc000959340) Reply frame received for 3\nI0515 23:39:06.395565 205 log.go:172] (0xc000959340) (0xc000830e60) Create stream\nI0515 23:39:06.395574 205 log.go:172] (0xc000959340) (0xc000830e60) Stream added, broadcasting: 5\nI0515 23:39:06.396488 205 log.go:172] (0xc000959340) Reply frame received for 5\nI0515 23:39:06.487866 205 log.go:172] (0xc000959340) Data frame received for 5\nI0515 23:39:06.487923 205 log.go:172] (0xc000830e60) (5) Data frame handling\nI0515 23:39:06.487958 205 log.go:172] (0xc000830e60) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0515 23:39:06.494529 205 log.go:172] (0xc000959340) Data frame received for 5\nI0515 23:39:06.494612 205 log.go:172] (0xc000830e60) (5) Data frame handling\nI0515 23:39:06.494631 205 log.go:172] (0xc000830e60) (5) Data frame sent\nI0515 23:39:06.494640 205 log.go:172] (0xc000959340) Data frame 
------------------------------
[sig-network] Services
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 23:38:54.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9038
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-9038
I0515 23:38:55.177055 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9038, replica count: 2
I0515 23:38:58.227645 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0515 23:39:01.227925 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 15 23:39:01.227: INFO: Creating new exec pod
May 15 23:39:06.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9038 execpodpjzfp -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 15 23:39:06.501: INFO: stderr: "I0515 23:39:06.391821 205 log.go:172] (0xc000959340) (0xc0009b2280) Create stream\nI0515 23:39:06.391887 205 log.go:172] (0xc000959340) (0xc0009b2280) Stream added, broadcasting: 1\nI0515 23:39:06.394726 205 log.go:172] (0xc000959340) Reply frame received for 1\nI0515 23:39:06.394770 205 log.go:172] (0xc000959340) (0xc000830960) Create stream\nI0515 23:39:06.394783 205 log.go:172] (0xc000959340) (0xc000830960) Stream added, broadcasting: 3\nI0515 23:39:06.395538 205 log.go:172] (0xc000959340) Reply frame received for 3\nI0515 23:39:06.395565 205 log.go:172] (0xc000959340) (0xc000830e60) Create stream\nI0515 23:39:06.395574 205 log.go:172] (0xc000959340) (0xc000830e60) Stream added, broadcasting: 5\nI0515 23:39:06.396488 205 log.go:172] (0xc000959340) Reply frame received for 5\nI0515 23:39:06.487866 205 log.go:172] (0xc000959340) Data frame received for 5\nI0515 23:39:06.487923 205 log.go:172] (0xc000830e60) (5) Data frame handling\nI0515 23:39:06.487958 205 log.go:172] (0xc000830e60) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0515 23:39:06.494529 205 log.go:172] (0xc000959340) Data frame received for 5\nI0515 23:39:06.494612 205 log.go:172] (0xc000830e60) (5) Data frame handling\nI0515 23:39:06.494631 205 log.go:172] (0xc000830e60) (5) Data frame sent\nI0515 23:39:06.494640 205 log.go:172] (0xc000959340) Data frame received for 5\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0515 23:39:06.494648 205 log.go:172] (0xc000830e60) (5) Data frame handling\nI0515 23:39:06.494665 205 log.go:172] (0xc000959340) Data frame received for 3\nI0515 23:39:06.494725 205 log.go:172] (0xc000830960) (3) Data frame handling\nI0515 23:39:06.496192 205 log.go:172] (0xc000959340) Data frame received for 1\nI0515 23:39:06.496209 205 log.go:172] (0xc0009b2280) (1) Data frame handling\nI0515 23:39:06.496225 205 log.go:172] (0xc0009b2280) (1) Data frame sent\nI0515 23:39:06.496244 205 log.go:172] (0xc000959340) (0xc0009b2280) Stream removed, broadcasting: 1\nI0515 23:39:06.496257 205 log.go:172] (0xc000959340) Go away received\nI0515 23:39:06.497068 205 log.go:172] (0xc000959340) (0xc0009b2280) Stream removed, broadcasting: 1\nI0515 23:39:06.497101 205 log.go:172] (0xc000959340) (0xc000830960) Stream removed, broadcasting: 3\nI0515 23:39:06.497267 205 log.go:172] (0xc000959340) (0xc000830e60) Stream removed, broadcasting: 5\n"
May 15 23:39:06.501: INFO: stdout: ""
May 15 23:39:06.502: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9038 execpodpjzfp -- /bin/sh -x -c nc -zv -t -w 2 10.98.231.230 80'
May 15 23:39:06.741: INFO: stderr: "I0515 23:39:06.674560 225 log.go:172] (0xc000b4f290) (0xc000a585a0) Create stream\nI0515 23:39:06.674606 225 log.go:172] (0xc000b4f290) (0xc000a585a0) Stream added, broadcasting: 1\nI0515 23:39:06.679663 225 log.go:172] (0xc000b4f290) Reply frame received for 1\nI0515 23:39:06.679725 225 log.go:172] (0xc000b4f290) (0xc000832a00) Create stream\nI0515 23:39:06.679758 225 log.go:172] (0xc000b4f290) (0xc000832a00) Stream added, broadcasting: 3\nI0515 23:39:06.680708 225 log.go:172] (0xc000b4f290) Reply frame received for 3\nI0515 23:39:06.680751 225 log.go:172] (0xc000b4f290) (0xc0008334a0) Create stream\nI0515 23:39:06.680766 225 log.go:172] (0xc000b4f290) (0xc0008334a0) Stream added, broadcasting: 5\nI0515 23:39:06.681861 225 log.go:172] (0xc000b4f290) Reply frame received for 5\nI0515 23:39:06.735502 225 log.go:172] (0xc000b4f290) Data frame received for 5\nI0515 23:39:06.735562 225 log.go:172] (0xc0008334a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.231.230 80\nConnection to 10.98.231.230 80 port [tcp/http] succeeded!\nI0515 23:39:06.735589 225 log.go:172] (0xc000b4f290) Data frame received for 3\nI0515 23:39:06.735609 225 log.go:172] (0xc000832a00) (3) Data frame handling\nI0515 23:39:06.735637 225 log.go:172] (0xc0008334a0) (5) Data frame sent\nI0515 23:39:06.735646 225 log.go:172] (0xc000b4f290) Data frame received for 5\nI0515 23:39:06.735654 225 log.go:172] (0xc0008334a0) (5) Data frame handling\nI0515 23:39:06.736570 225 log.go:172] (0xc000b4f290) Data frame received for 1\nI0515 23:39:06.736579 225 log.go:172] (0xc000a585a0) (1) Data frame handling\nI0515 23:39:06.736600 225 log.go:172] (0xc000a585a0) (1) Data frame sent\nI0515 23:39:06.736606 225 log.go:172] (0xc000b4f290) (0xc000a585a0) Stream removed, broadcasting: 1\nI0515 23:39:06.736613 225 log.go:172] (0xc000b4f290) Go away received\nI0515 23:39:06.737017 225 log.go:172] (0xc000b4f290) (0xc000a585a0) Stream removed, broadcasting: 1\nI0515 23:39:06.737042 225 log.go:172] (0xc000b4f290) (0xc000832a00) Stream removed, broadcasting: 3\nI0515 23:39:06.737053 225 log.go:172] (0xc000b4f290) (0xc0008334a0) Stream removed, broadcasting: 5\n"
May 15 23:39:06.741: INFO: stdout: ""
May 15 23:39:06.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9038 execpodpjzfp -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31817'
May 15 23:39:06.934: INFO: stderr: "I0515 23:39:06.859495 245 log.go:172] (0xc000a8d340) (0xc000814640) Create stream\nI0515 23:39:06.859543 245 log.go:172] (0xc000a8d340) (0xc000814640) Stream added, broadcasting: 1\nI0515 23:39:06.863057 245 log.go:172] (0xc000a8d340) Reply frame received for 1\nI0515 23:39:06.863124 245 log.go:172] (0xc000a8d340) (0xc00081adc0) Create stream\nI0515 23:39:06.863148 245 log.go:172] (0xc000a8d340) (0xc00081adc0) Stream added, broadcasting: 3\nI0515 23:39:06.865615 245 log.go:172] (0xc000a8d340) Reply frame received for 3\nI0515 23:39:06.865644 245 log.go:172] (0xc000a8d340) (0xc00080cb40) Create stream\nI0515 23:39:06.865657 245 log.go:172] (0xc000a8d340) (0xc00080cb40) Stream added, broadcasting: 5\nI0515 23:39:06.866396 245 log.go:172] (0xc000a8d340) Reply frame received for 5\nI0515 23:39:06.928681 245 log.go:172] (0xc000a8d340) Data frame received for 5\nI0515 23:39:06.928706 245 log.go:172] (0xc00080cb40) (5) Data frame handling\nI0515 23:39:06.928715 245 log.go:172] (0xc00080cb40) (5) Data frame sent\nI0515 23:39:06.928722 245 log.go:172] (0xc000a8d340) Data frame received for 5\nI0515 23:39:06.928728 245 log.go:172] (0xc00080cb40) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31817\nConnection to 172.17.0.13 31817 port [tcp/31817] succeeded!\nI0515 23:39:06.928753 245 log.go:172] (0xc000a8d340) Data frame received for 3\nI0515 23:39:06.928761 245 log.go:172] (0xc00081adc0) (3) Data frame handling\nI0515 23:39:06.930208 245 log.go:172] (0xc000a8d340) Data frame received for 1\nI0515 23:39:06.930246 245 log.go:172] (0xc000814640) (1) Data frame handling\nI0515 23:39:06.930271 245 log.go:172] (0xc000814640) (1) Data frame sent\nI0515 23:39:06.930310 245 log.go:172] (0xc000a8d340) (0xc000814640) Stream removed, broadcasting: 1\nI0515 23:39:06.930343 245 log.go:172] (0xc000a8d340) Go away received\nI0515 23:39:06.930733 245 log.go:172] (0xc000a8d340) (0xc000814640) Stream removed, broadcasting: 1\nI0515 23:39:06.930760 245 log.go:172] (0xc000a8d340) (0xc00081adc0) Stream removed, broadcasting: 3\nI0515 23:39:06.930773 245 log.go:172] (0xc000a8d340) (0xc00080cb40) Stream removed, broadcasting: 5\n"
May 15 23:39:06.935: INFO: stdout: ""
May 15 23:39:06.935: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9038 execpodpjzfp -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31817'
May 15 23:39:07.118: INFO: stderr: "I0515 23:39:07.057877 265 log.go:172] (0xc000ae11e0) (0xc000a7e3c0) Create stream\nI0515 23:39:07.057931 265 log.go:172] (0xc000ae11e0) (0xc000a7e3c0) Stream added, broadcasting: 1\nI0515 23:39:07.062954 265 log.go:172] (0xc000ae11e0) Reply frame received for 1\nI0515 23:39:07.062992 265 log.go:172] (0xc000ae11e0) (0xc0006ac500) Create stream\nI0515 23:39:07.063001 265 log.go:172] (0xc000ae11e0) (0xc0006ac500) Stream added, broadcasting: 3\nI0515 23:39:07.063810 265 log.go:172] (0xc000ae11e0) Reply frame received for 3\nI0515 23:39:07.063866 265 log.go:172] (0xc000ae11e0) (0xc0005321e0) Create stream\nI0515 23:39:07.063894 265 log.go:172] (0xc000ae11e0) (0xc0005321e0) Stream added, broadcasting: 5\nI0515 23:39:07.064720 265 log.go:172] (0xc000ae11e0) Reply frame received for 5\nI0515 23:39:07.113262 265 log.go:172] (0xc000ae11e0) Data frame received for 5\nI0515 23:39:07.113290 265 log.go:172] (0xc0005321e0) (5) Data frame handling\nI0515 23:39:07.113306 265 log.go:172] (0xc0005321e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31817\nConnection to 172.17.0.12 31817 port [tcp/31817] succeeded!\nI0515 23:39:07.113328 265 log.go:172] (0xc000ae11e0) Data frame received for 3\nI0515 23:39:07.113337 265 log.go:172] (0xc0006ac500) (3) Data frame handling\nI0515 23:39:07.113383 265 log.go:172] (0xc000ae11e0) Data frame received for 5\nI0515 23:39:07.113404 265 log.go:172] (0xc0005321e0) (5) Data frame handling\nI0515 23:39:07.114415 265 log.go:172] (0xc000ae11e0) Data frame received for 1\nI0515 23:39:07.114431 265 log.go:172] (0xc000a7e3c0) (1) Data frame handling\nI0515 23:39:07.114438 265 log.go:172] (0xc000a7e3c0) (1) Data frame sent\nI0515 23:39:07.114445 265 log.go:172] (0xc000ae11e0) (0xc000a7e3c0) Stream removed, broadcasting: 1\nI0515 23:39:07.114486 265 log.go:172] (0xc000ae11e0) Go away received\nI0515 23:39:07.114654 265 log.go:172] (0xc000ae11e0) (0xc000a7e3c0) Stream removed, broadcasting: 1\nI0515 23:39:07.114673 265 log.go:172] (0xc000ae11e0) (0xc0006ac500) Stream removed, broadcasting: 3\nI0515 23:39:07.114679 265 log.go:172] (0xc000ae11e0) (0xc0005321e0) Stream removed, broadcasting: 5\n"
May 15 23:39:07.118: INFO: stdout: ""
May 15 23:39:07.118: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 23:39:07.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9038" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:12.221 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":4,"skipped":58,"failed":0}
SSSSSSSSSSSSSSSSS
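
Outside the framework, the type flip this spec performs can be sketched with kubectl alone; the patch payload below is illustrative, since the e2e code mutates the Service object programmatically:

    kubectl create service externalname externalname-service \
      --external-name=foo.example.com -n services-9038
    kubectl patch service externalname-service -n services-9038 \
      -p '{"spec":{"type":"NodePort","externalName":null,"ports":[{"port":80,"protocol":"TCP"}]}}'

The four nc probes above then confirm reachability on every path a NodePort service exposes: the service DNS name, the allocated ClusterIP (10.98.231.230:80), and the allocated node port (31817) on each node address (172.17.0.13 and 172.17.0.12).
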
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 23:39:07.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 15 23:39:07.270: INFO: Create a RollingUpdate DaemonSet
May 15 23:39:07.273: INFO: Check that daemon pods launch on every node of the cluster
May 15 23:39:07.292: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 15 23:39:07.332: INFO: Number of nodes with available pods: 0
May 15 23:39:07.332: INFO: Node latest-worker is running more than one daemon pod
May 15 23:39:08.337: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 15 23:39:08.340: INFO: Number of nodes with available pods: 0
May 15 23:39:08.340: INFO: Node latest-worker is running more than one daemon pod
May 15 23:39:09.336: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 15 23:39:09.339: INFO: Number of nodes with available pods: 0
May 15 23:39:09.339: INFO: Node latest-worker is running more than one daemon pod
May 15 23:39:10.460: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 15 23:39:10.464: INFO: Number of nodes with available pods: 0
May 15 23:39:10.464: INFO: Node latest-worker is running more than one daemon pod
May 15 23:39:11.352: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 15 23:39:11.356: INFO: Number of nodes with available pods: 0
May 15 23:39:11.356: INFO: Node latest-worker is running more than one daemon pod
May 15 23:39:12.339: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 15 23:39:12.344: INFO: Number of nodes with available pods: 1
May 15 23:39:12.344: INFO: Node latest-worker2 is running more than one daemon pod
May 15 23:39:13.339: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 15 23:39:13.343: INFO: Number of nodes with available pods: 2
May 15 23:39:13.343: INFO: Number of running nodes: 2, number of available pods: 2
May 15 23:39:13.343: INFO: Update the DaemonSet to trigger a rollout
May 15 23:39:13.351: INFO: Updating DaemonSet daemon-set
May 15 23:39:25.423: INFO: Roll back the DaemonSet before rollout is complete
May 15 23:39:25.439: INFO: Updating DaemonSet daemon-set
May 15 23:39:25.439: INFO: Make sure DaemonSet rollback is complete
May 15 23:39:25.446: INFO: Wrong image for pod: daemon-set-zgkqb. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 15 23:39:25.446: INFO: Pod daemon-set-zgkqb is not available
May 15 23:39:25.496: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 15 23:39:26.501: INFO: Wrong image for pod: daemon-set-zgkqb. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 15 23:39:26.501: INFO: Pod daemon-set-zgkqb is not available
May 15 23:39:26.505: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 15 23:39:27.500: INFO: Wrong image for pod: daemon-set-zgkqb. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 15 23:39:27.500: INFO: Pod daemon-set-zgkqb is not available
May 15 23:39:27.503: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 15 23:39:28.501: INFO: Pod daemon-set-6nc8n is not available
May 15 23:39:28.507: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3693, will wait for the garbage collector to delete the pods
May 15 23:39:28.570: INFO: Deleting DaemonSet.extensions daemon-set took: 5.013411ms
May 15 23:39:28.670: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.221496ms
May 15 23:39:31.774: INFO: Number of nodes with available pods: 0
May 15 23:39:31.774: INFO: Number of running nodes: 0, number of available pods: 0
May 15 23:39:31.780: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3693/daemonsets","resourceVersion":"4993759"},"items":null}
May 15 23:39:31.783: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3693/pods","resourceVersion":"4993759"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 23:39:31.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3693" for this suite.
• [SLOW TEST:24.622 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":5,"skipped":75,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
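
The rollback sequence in this spec corresponds to the ordinary rollout workflow. A hand-driven sketch against the same DaemonSet name; the container name "app" is assumed, since the log does not show it:

    kubectl -n daemonsets-3693 set image daemonset/daemon-set app=foo:non-existent  # trigger a rollout that cannot finish
    kubectl -n daemonsets-3693 rollout undo daemonset/daemon-set                    # roll back before it completes
    kubectl -n daemonsets-3693 rollout status daemonset/daemon-set

The assertion is that pods still running the old image (httpd:2.4.38-alpine) are not restarted by the rollback; only the pod already replaced with foo:non-existent (daemon-set-zgkqb above) gets recreated (as daemon-set-6nc8n).
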
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":6,"skipped":147,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:39:36.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9404.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9404.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9404.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9404.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 23:39:44.755: INFO: DNS probes using dns-test-883139fc-7f45-436d-836e-c8828fe1bc24 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9404.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9404.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9404.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9404.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 23:39:52.907: INFO: File wheezy_udp@dns-test-service-3.dns-9404.svc.cluster.local from pod dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 23:39:52.910: INFO: File jessie_udp@dns-test-service-3.dns-9404.svc.cluster.local from pod dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 23:39:52.910: INFO: Lookups using dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d failed for: [wheezy_udp@dns-test-service-3.dns-9404.svc.cluster.local jessie_udp@dns-test-service-3.dns-9404.svc.cluster.local] May 15 23:39:57.915: INFO: File wheezy_udp@dns-test-service-3.dns-9404.svc.cluster.local from pod dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 23:39:57.917: INFO: File jessie_udp@dns-test-service-3.dns-9404.svc.cluster.local from pod dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d contains 'foo.example.com. ' instead of 'bar.example.com.' 
------------------------------
[sig-network] DNS
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 23:39:36.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9404.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9404.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9404.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9404.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 15 23:39:44.755: INFO: DNS probes using dns-test-883139fc-7f45-436d-836e-c8828fe1bc24 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9404.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9404.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9404.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9404.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 15 23:39:52.907: INFO: File wheezy_udp@dns-test-service-3.dns-9404.svc.cluster.local from pod dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d contains 'foo.example.com.
' instead of 'bar.example.com.'
May 15 23:39:52.910: INFO: File jessie_udp@dns-test-service-3.dns-9404.svc.cluster.local from pod dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d contains 'foo.example.com.
' instead of 'bar.example.com.'
May 15 23:39:52.910: INFO: Lookups using dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d failed for: [wheezy_udp@dns-test-service-3.dns-9404.svc.cluster.local jessie_udp@dns-test-service-3.dns-9404.svc.cluster.local]
May 15 23:39:57.915: INFO: File wheezy_udp@dns-test-service-3.dns-9404.svc.cluster.local from pod dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d contains 'foo.example.com.
' instead of 'bar.example.com.'
May 15 23:39:57.917: INFO: File jessie_udp@dns-test-service-3.dns-9404.svc.cluster.local from pod dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d contains 'foo.example.com.
' instead of 'bar.example.com.'
May 15 23:39:57.917: INFO: Lookups using dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d failed for: [wheezy_udp@dns-test-service-3.dns-9404.svc.cluster.local jessie_udp@dns-test-service-3.dns-9404.svc.cluster.local]
May 15 23:40:02.953: INFO: File wheezy_udp@dns-test-service-3.dns-9404.svc.cluster.local from pod dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d contains 'foo.example.com.
' instead of 'bar.example.com.'
May 15 23:40:02.957: INFO: File jessie_udp@dns-test-service-3.dns-9404.svc.cluster.local from pod dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d contains 'foo.example.com.
' instead of 'bar.example.com.'
May 15 23:40:02.957: INFO: Lookups using dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d failed for: [wheezy_udp@dns-test-service-3.dns-9404.svc.cluster.local jessie_udp@dns-test-service-3.dns-9404.svc.cluster.local]
May 15 23:40:07.916: INFO: File wheezy_udp@dns-test-service-3.dns-9404.svc.cluster.local from pod dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d contains 'foo.example.com.
' instead of 'bar.example.com.'
May 15 23:40:07.920: INFO: File jessie_udp@dns-test-service-3.dns-9404.svc.cluster.local from pod dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d contains 'foo.example.com.
' instead of 'bar.example.com.'
May 15 23:40:07.920: INFO: Lookups using dns-9404/dns-test-457d9599-41cf-4629-8561-1065db96751d failed for: [wheezy_udp@dns-test-service-3.dns-9404.svc.cluster.local jessie_udp@dns-test-service-3.dns-9404.svc.cluster.local]
May 15 23:40:12.919: INFO: DNS probes using dns-test-457d9599-41cf-4629-8561-1065db96751d succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9404.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9404.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9404.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9404.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 15 23:40:21.824: INFO: DNS probes using dns-test-95d57f94-1042-49ed-aee8-2d6dd63d3b2a succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 23:40:21.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9404" for this suite.
• [SLOW TEST:45.914 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":7,"skipped":155,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
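
What the probe pods assert can be reproduced with the same dig queries from any pod in the cluster, tracking the three phases of the spec (the patch payload is illustrative; the e2e code updates the Service programmatically):

    dig +short dns-test-service-3.dns-9404.svc.cluster.local CNAME   # phase 1: foo.example.com.
    kubectl patch service dns-test-service-3 -n dns-9404 \
      -p '{"spec":{"externalName":"bar.example.com"}}'
    dig +short dns-test-service-3.dns-9404.svc.cluster.local CNAME   # phase 2: bar.example.com., once caches expire
    dig +short dns-test-service-3.dns-9404.svc.cluster.local A       # phase 3: the ClusterIP, after type=ClusterIP

The retries between 23:39:52 and 23:40:12 above are exactly that cache window: the probes keep seeing foo.example.com. until the DNS record catches up with the patched Service.
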
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":8,"skipped":189,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:40:22.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 23:40:22.628: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-6516 I0515 23:40:22.640856 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6516, replica count: 1 I0515 23:40:23.691313 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:40:24.691569 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:40:25.691789 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:40:26.691955 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 23:40:26.828: INFO: Created: latency-svc-5ftn4 May 15 23:40:26.849: INFO: Got endpoints: latency-svc-5ftn4 [57.362321ms] May 15 23:40:26.950: INFO: Created: latency-svc-jzqr4 May 15 23:40:26.953: INFO: Got endpoints: latency-svc-jzqr4 [104.181003ms] May 15 23:40:26.997: INFO: Created: latency-svc-qnjww May 15 23:40:27.124: INFO: Got endpoints: latency-svc-qnjww [274.479152ms] May 15 23:40:27.137: INFO: Created: latency-svc-5959z May 15 23:40:27.144: INFO: Got endpoints: latency-svc-5959z [294.571027ms] May 15 23:40:27.170: INFO: Created: latency-svc-dzq9k May 15 23:40:27.201: INFO: Got endpoints: latency-svc-dzq9k [352.397606ms] May 15 23:40:27.279: INFO: Created: latency-svc-gjgzf May 15 23:40:27.287: INFO: Got endpoints: latency-svc-gjgzf [437.752311ms] May 15 23:40:27.308: INFO: Created: latency-svc-xh5d9 May 15 23:40:27.323: INFO: Got endpoints: latency-svc-xh5d9 [473.735258ms] May 15 23:40:27.429: INFO: Created: latency-svc-64tb9 May 15 23:40:27.459: INFO: Got endpoints: latency-svc-64tb9 [609.776053ms] May 15 23:40:27.488: INFO: Created: latency-svc-9nrvr May 15 23:40:27.504: INFO: Got endpoints: latency-svc-9nrvr [654.340611ms] May 15 23:40:27.581: INFO: Created: latency-svc-648xz May 15 23:40:27.588: INFO: Got endpoints: latency-svc-648xz [738.33064ms] May 15 23:40:27.608: INFO: Created: latency-svc-hcwq6 May 15 23:40:27.624: INFO: Got endpoints: latency-svc-hcwq6 [774.891379ms] May 15 23:40:27.668: INFO: Created: latency-svc-kmlqt May 15 23:40:27.722: INFO: Got endpoints: latency-svc-kmlqt [872.82132ms] May 15 23:40:27.752: INFO: 
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 23:40:22.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 15 23:40:22.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-6516
I0515 23:40:22.640856 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6516, replica count: 1
I0515 23:40:23.691313 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0515 23:40:24.691569 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0515 23:40:25.691789 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0515 23:40:26.691955 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 15 23:40:26.828: INFO: Created: latency-svc-5ftn4
May 15 23:40:26.849: INFO: Got endpoints: latency-svc-5ftn4 [57.362321ms]
May 15 23:40:26.950: INFO: Created: latency-svc-jzqr4
May 15 23:40:26.953: INFO: Got endpoints: latency-svc-jzqr4 [104.181003ms]
May 15 23:40:26.997: INFO: Created: latency-svc-qnjww
May 15 23:40:27.124: INFO: Got endpoints: latency-svc-qnjww [274.479152ms]
May 15 23:40:27.137: INFO: Created: latency-svc-5959z
May 15 23:40:27.144: INFO: Got endpoints: latency-svc-5959z [294.571027ms]
May 15 23:40:27.170: INFO: Created: latency-svc-dzq9k
May 15 23:40:27.201: INFO: Got endpoints: latency-svc-dzq9k [352.397606ms]
May 15 23:40:27.279: INFO: Created: latency-svc-gjgzf
May 15 23:40:27.287: INFO: Got endpoints: latency-svc-gjgzf [437.752311ms]
May 15 23:40:27.308: INFO: Created: latency-svc-xh5d9
May 15 23:40:27.323: INFO: Got endpoints: latency-svc-xh5d9 [473.735258ms]
May 15 23:40:27.429: INFO: Created: latency-svc-64tb9
May 15 23:40:27.459: INFO: Got endpoints: latency-svc-64tb9 [609.776053ms]
May 15 23:40:27.488: INFO: Created: latency-svc-9nrvr
May 15 23:40:27.504: INFO: Got endpoints: latency-svc-9nrvr [654.340611ms]
May 15 23:40:27.581: INFO: Created: latency-svc-648xz
May 15 23:40:27.588: INFO: Got endpoints: latency-svc-648xz [738.33064ms]
May 15 23:40:27.608: INFO: Created: latency-svc-hcwq6
May 15 23:40:27.624: INFO: Got endpoints: latency-svc-hcwq6 [774.891379ms]
May 15 23:40:27.668: INFO: Created: latency-svc-kmlqt
May 15 23:40:27.722: INFO: Got endpoints: latency-svc-kmlqt [872.82132ms]
May 15 23:40:27.752: INFO: Created: latency-svc-48x24
May 15 23:40:27.769: INFO: Got endpoints: latency-svc-48x24 [919.782639ms]
May 15 23:40:27.819: INFO: Created: latency-svc-rzdqb
May 15 23:40:27.866: INFO: Got endpoints: latency-svc-rzdqb [1.016508325s]
May 15 23:40:27.915: INFO: Created: latency-svc-dffhx
May 15 23:40:27.926: INFO: Got endpoints: latency-svc-dffhx [1.076547178s]
May 15 23:40:28.010: INFO: Created: latency-svc-m4dmb
May 15 23:40:28.022: INFO: Got endpoints: latency-svc-m4dmb [1.172428998s]
May 15 23:40:28.106: INFO: Created: latency-svc-95gsd
May 15 23:40:28.165: INFO: Got endpoints: latency-svc-95gsd [1.211802108s]
May 15 23:40:28.196: INFO: Created: latency-svc-wlg6z
May 15 23:40:28.214: INFO: Got endpoints: latency-svc-wlg6z [1.089857209s]
May 15 23:40:28.263: INFO: Created: latency-svc-ks754
May 15 23:40:28.310: INFO: Got endpoints: latency-svc-ks754 [1.166244274s]
May 15 23:40:28.335: INFO: Created: latency-svc-mp4zj
May 15 23:40:28.346: INFO: Got endpoints: latency-svc-mp4zj [1.144844393s]
May 15 23:40:28.395: INFO: Created: latency-svc-tnbxz
May 15 23:40:28.459: INFO: Got endpoints: latency-svc-tnbxz [1.171675835s]
May 15 23:40:28.462: INFO: Created: latency-svc-b54np
May 15 23:40:28.472: INFO: Got endpoints: latency-svc-b54np [1.149162826s]
May 15 23:40:28.508: INFO: Created: latency-svc-t9l75
May 15 23:40:28.620: INFO: Got endpoints: latency-svc-t9l75 [1.160947069s]
May 15 23:40:28.676: INFO: Created: latency-svc-jn69g
May 15 23:40:28.719: INFO: Got endpoints: latency-svc-jn69g [1.214827607s]
May 15 23:40:28.785: INFO: Created: latency-svc-q2hxm
May 15 23:40:28.827: INFO: Got endpoints: latency-svc-q2hxm [1.239301153s]
May 15 23:40:28.956: INFO: Created: latency-svc-kmqzd
May 15 23:40:29.007: INFO: Got endpoints: latency-svc-kmqzd [1.382552659s]
May 15 23:40:29.153: INFO: Created: latency-svc-xpn2r
May 15 23:40:29.175: INFO: Got endpoints: latency-svc-xpn2r [1.452832903s]
May 15 23:40:29.242: INFO: Created: latency-svc-9sd5b
May 15 23:40:29.345: INFO: Got endpoints: latency-svc-9sd5b [1.576011295s]
May 15 23:40:29.386: INFO: Created: latency-svc-fp69b
May 15 23:40:29.435: INFO: Got endpoints: latency-svc-fp69b [1.569283976s]
May 15 23:40:29.497: INFO: Created: latency-svc-4bc6k
May 15 23:40:29.548: INFO: Got endpoints: latency-svc-4bc6k [1.622257814s]
May 15 23:40:29.584: INFO: Created: latency-svc-n9wrq
May 15 23:40:29.657: INFO: Got endpoints: latency-svc-n9wrq [1.634983157s]
May 15 23:40:29.692: INFO: Created: latency-svc-ll8g2
May 15 23:40:29.734: INFO: Got endpoints: latency-svc-ll8g2 [1.569251843s]
May 15 23:40:29.832: INFO: Created: latency-svc-j82ft
May 15 23:40:29.844: INFO: Got endpoints: latency-svc-j82ft [1.630594263s]
May 15 23:40:29.920: INFO: Created: latency-svc-k8qxc
May 15 23:40:30.005: INFO: Got endpoints: latency-svc-k8qxc [1.695087595s]
May 15 23:40:30.089: INFO: Created: latency-svc-tlrpt
May 15 23:40:30.172: INFO: Got endpoints: latency-svc-tlrpt [1.82518739s]
May 15 23:40:30.222: INFO: Created: latency-svc-5r2jq
May 15 23:40:30.247: INFO: Got endpoints: latency-svc-5r2jq [1.787952846s]
May 15 23:40:30.321: INFO: Created: latency-svc-5vhmh
May 15 23:40:30.334: INFO: Got endpoints: latency-svc-5vhmh [1.861359354s]
May 15 23:40:30.364: INFO: Created: latency-svc-gh8kt
May 15 23:40:30.394: INFO: Got endpoints: latency-svc-gh8kt [1.774244959s]
May 15 23:40:30.489: INFO: Created: latency-svc-b8pdj
May 15 23:40:30.498: INFO: Got endpoints: latency-svc-b8pdj [1.779067397s]
May 15 23:40:30.538: INFO: Created: latency-svc-r5tsm
May 15 23:40:30.551: INFO: Got endpoints: latency-svc-r5tsm [1.724021589s]
May 15 23:40:30.580: INFO: Created: latency-svc-gkshd
May 15 23:40:30.657: INFO: Got endpoints: latency-svc-gkshd [1.64964712s]
May 15 23:40:30.719: INFO: Created: latency-svc-rjlvm
May 15 23:40:30.731: INFO: Got endpoints: latency-svc-rjlvm [1.55628754s]
May 15 23:40:30.812: INFO: Created: latency-svc-7z6v8
May 15 23:40:30.822: INFO: Got endpoints: latency-svc-7z6v8 [1.477052901s]
May 15 23:40:30.844: INFO: Created: latency-svc-4nmqb
May 15 23:40:30.858: INFO: Got endpoints: latency-svc-4nmqb [1.423163773s]
May 15 23:40:30.910: INFO: Created: latency-svc-j7wnv
May 15 23:40:30.968: INFO: Got endpoints: latency-svc-j7wnv [1.419583296s]
May 15 23:40:31.011: INFO: Created: latency-svc-5tgxf
May 15 23:40:31.060: INFO: Got endpoints: latency-svc-5tgxf [1.403685349s]
May 15 23:40:31.132: INFO: Created: latency-svc-lm4dl
May 15 23:40:31.147: INFO: Got endpoints: latency-svc-lm4dl [1.412317172s]
May 15 23:40:31.174: INFO: Created: latency-svc-xrp9c
May 15 23:40:31.189: INFO: Got endpoints: latency-svc-xrp9c [1.344527927s]
May 15 23:40:31.293: INFO: Created: latency-svc-mp4mc
May 15 23:40:31.296: INFO: Got endpoints: latency-svc-mp4mc [1.290868412s]
May 15 23:40:31.331: INFO: Created: latency-svc-s5t7t
May 15 23:40:31.339: INFO: Got endpoints: latency-svc-s5t7t [1.16722605s]
May 15 23:40:31.381: INFO: Created: latency-svc-b7lsw
May 15 23:40:31.438: INFO: Got endpoints: latency-svc-b7lsw [1.191080232s]
May 15 23:40:31.479: INFO: Created: latency-svc-f7kjz
May 15 23:40:31.567: INFO: Got endpoints: latency-svc-f7kjz [1.23316551s]
May 15 23:40:31.600: INFO: Created: latency-svc-b9zdh
May 15 23:40:31.616: INFO: Got endpoints: latency-svc-b9zdh [1.221425557s]
May 15 23:40:31.642: INFO: Created: latency-svc-pm7v2
May 15 23:40:31.658: INFO: Got endpoints: latency-svc-pm7v2 [1.160452133s]
May 15 23:40:31.710: INFO: Created: latency-svc-lttkn
May 15 23:40:31.724: INFO: Got endpoints: latency-svc-lttkn [1.173131093s]
May 15 23:40:31.750: INFO: Created: latency-svc-m95lw
May 15 23:40:31.761: INFO: Got endpoints: latency-svc-m95lw [1.104429654s]
May 15 23:40:31.786: INFO: Created: latency-svc-f87zs
May 15 23:40:31.798: INFO: Got endpoints: latency-svc-f87zs [1.06609845s]
May 15 23:40:31.887: INFO: Created: latency-svc-jgxvl
May 15 23:40:31.923: INFO: Got endpoints: latency-svc-jgxvl [1.100557845s]
May 15 23:40:32.036: INFO: Created: latency-svc-rnmqz
May 15 23:40:32.050: INFO: Got endpoints: latency-svc-rnmqz [1.191030855s]
May 15 23:40:32.074: INFO: Created: latency-svc-hfc5t
May 15 23:40:32.098: INFO: Got endpoints: latency-svc-hfc5t [1.130052335s]
May 15 23:40:32.128: INFO: Created: latency-svc-xpm7l
May 15 23:40:32.195: INFO: Got endpoints: latency-svc-xpm7l [1.134740851s]
May 15 23:40:32.229: INFO: Created: latency-svc-lmvnt
May 15 23:40:32.242: INFO: Got endpoints: latency-svc-lmvnt [1.095400697s]
May 15 23:40:32.345: INFO: Created: latency-svc-c978k
May 15 23:40:32.356: INFO: Got endpoints: latency-svc-c978k [1.167351312s]
May 15 23:40:32.380: INFO: Created: latency-svc-72gvt
May 15 23:40:32.398: INFO: Got endpoints: latency-svc-72gvt [1.102292095s]
May 15 23:40:32.508: INFO: Created: latency-svc-8szgd
May 15 23:40:32.520: INFO: Got endpoints: latency-svc-8szgd [1.18130307s]
May 15 23:40:32.554: INFO: Created: latency-svc-swxps
May 15 23:40:32.567: INFO: Got endpoints: latency-svc-swxps [1.129466639s]
May 15 23:40:32.584: INFO: Created: latency-svc-9dfdz
May 15 23:40:32.598: INFO: Got endpoints: latency-svc-9dfdz [1.030760501s]
May 15 23:40:32.692: INFO: Created: latency-svc-8ncxg
May 15 23:40:32.711: INFO: Got endpoints: latency-svc-8ncxg [1.095414661s]
May 15 23:40:32.734: INFO: Created: latency-svc-wttxm
May 15 23:40:32.758: INFO: Got endpoints: latency-svc-wttxm [1.099772865s]
May 15 23:40:32.836: INFO: Created: latency-svc-xdlwx
May 15 23:40:32.854: INFO: Got endpoints: latency-svc-xdlwx [1.129212332s]
May 15 23:40:32.890: INFO: Created: latency-svc-ngxzg
May 15 23:40:32.920: INFO: Got endpoints: latency-svc-ngxzg [1.15912754s]
May 15 23:40:32.986: INFO: Created: latency-svc-5s8cz
May 15 23:40:32.996: INFO: Got endpoints: latency-svc-5s8cz [1.19871385s]
May 15 23:40:33.022: INFO: Created: latency-svc-mbbsx
May 15 23:40:33.082: INFO: Got endpoints: latency-svc-mbbsx [1.159087465s]
May 15 23:40:33.244: INFO: Created: latency-svc-7mhkg
May 15 23:40:33.289: INFO: Got endpoints: latency-svc-7mhkg [1.239315812s]
May 15 23:40:33.407: INFO: Created: latency-svc-vpstq
May 15 23:40:33.421: INFO: Got endpoints: latency-svc-vpstq [1.32247788s]
May 15 23:40:33.455: INFO: Created: latency-svc-t7dtx
May 15 23:40:33.490: INFO: Got endpoints: latency-svc-t7dtx [1.295080232s]
May 15 23:40:33.833: INFO: Created: latency-svc-78664
May 15 23:40:33.843: INFO: Created: latency-svc-w75bl
May 15 23:40:33.843: INFO: Got endpoints: latency-svc-78664 [1.600865118s]
May 15 23:40:33.890: INFO: Got endpoints: latency-svc-w75bl [1.533577249s]
May 15 23:40:34.005: INFO: Created: latency-svc-4np28
May 15 23:40:34.037: INFO: Created: latency-svc-5c9cj
May 15 23:40:34.038: INFO: Got endpoints: latency-svc-4np28 [1.639233862s]
May 15 23:40:34.052: INFO: Got endpoints: latency-svc-5c9cj [1.531566538s]
May 15 23:40:34.091: INFO: Created: latency-svc-pdq9l
May 15 23:40:34.183: INFO: Got endpoints: latency-svc-pdq9l [1.616134197s]
May 15 23:40:34.187: INFO: Created: latency-svc-65lbx
May 15 23:40:34.256: INFO: Got endpoints: latency-svc-65lbx [1.658811876s]
May 15 23:40:34.357: INFO: Created: latency-svc-l6n6s
May 15 23:40:34.394: INFO: Got endpoints: latency-svc-l6n6s [1.683057902s]
May 15 23:40:34.429: INFO: Created: latency-svc-wmdcc
May 15 23:40:34.513: INFO: Got endpoints: latency-svc-wmdcc [1.754569851s]
May 15 23:40:34.548: INFO: Created: latency-svc-q85rd
May 15 23:40:34.575: INFO: Got endpoints: latency-svc-q85rd [1.721500571s]
May 15 23:40:34.663: INFO: Created: latency-svc-96vhk
May 15 23:40:34.678: INFO: Got endpoints: latency-svc-96vhk [1.757343113s]
May 15 23:40:34.704: INFO: Created: latency-svc-nvjh9
May 15 23:40:34.762: INFO: Got endpoints: latency-svc-nvjh9 [1.76570503s]
May 15 23:40:34.807: INFO: Created: latency-svc-p6lls
May 15 23:40:34.810: INFO: Got endpoints: latency-svc-p6lls [1.728376686s]
May 15 23:40:34.867: INFO: Created: latency-svc-rts79
May 15 23:40:34.946: INFO: Got endpoints: latency-svc-rts79 [1.656654679s]
May 15 23:40:34.962: INFO: Created: latency-svc-jpvmn
May 15 23:40:34.975: INFO: Got endpoints: latency-svc-jpvmn [1.554799099s]
May 15 23:40:35.028: INFO: Created: latency-svc-vtkqc
May 15 23:40:35.044: INFO: Got endpoints: latency-svc-vtkqc [1.553434305s]
May 15 23:40:35.094: INFO: Created: latency-svc-gh8xd
May 15 23:40:35.102: INFO: Got endpoints: latency-svc-gh8xd [1.258846707s]
May 15 23:40:35.143: INFO: Created: latency-svc-48mvh
May 15 23:40:35.160: INFO: Got endpoints: latency-svc-48mvh [1.270454857s]
May 15 23:40:35.179: INFO: Created: latency-svc-8mjv6
May 15 23:40:35.231: INFO: Got endpoints: latency-svc-8mjv6 [1.193342081s]
May 15 23:40:35.256: INFO: Created: latency-svc-xhr5z
May 15 23:40:35.271: INFO: Got endpoints: latency-svc-xhr5z [1.218782863s]
May 15 23:40:35.310: INFO: Created: latency-svc-259b5
May 15 23:40:35.381: INFO: Got endpoints: latency-svc-259b5 [1.197738897s]
May 15 23:40:35.382: INFO: Created: latency-svc-gqwbf
May 15 23:40:35.406: INFO: Got endpoints: latency-svc-gqwbf [1.149305743s]
May 15 23:40:35.442: INFO: Created: latency-svc-4hmd9
May 15 23:40:35.452: INFO: Got endpoints: latency-svc-4hmd9 [1.057629548s]
May 15 23:40:35.472: INFO: Created: latency-svc-rg7jv
May 15 23:40:35.524: INFO: Got endpoints: latency-svc-rg7jv [1.011630696s]
May 15 23:40:35.551: INFO: Created: latency-svc-hkl5l
May 15 23:40:35.561: INFO: Got endpoints: latency-svc-hkl5l [985.979073ms]
May 15 23:40:35.580: INFO: Created: latency-svc-2hk9p
May 15 23:40:35.591: INFO: Got endpoints: latency-svc-2hk9p [913.379849ms]
May 15 23:40:35.610: INFO: Created: latency-svc-tkv7b
May 15 23:40:35.621: INFO: Got endpoints: latency-svc-tkv7b [859.094916ms]
May 15 23:40:35.718: INFO: Created: latency-svc-2vlmk
May 15 23:40:35.729: INFO: Got endpoints: latency-svc-2vlmk [918.624134ms]
May 15 23:40:35.748: INFO: Created: latency-svc-5z9pv
May 15 23:40:35.759: INFO: Got endpoints: latency-svc-5z9pv [813.854507ms]
May 15 23:40:35.778: INFO: Created: latency-svc-nrqn4
May 15 23:40:35.831: INFO: Got endpoints: latency-svc-nrqn4 [855.515278ms]
May 15 23:40:35.856: INFO: Created: latency-svc-4csf9
May 15 23:40:35.873: INFO: Got endpoints: latency-svc-4csf9 [829.49811ms]
May 15 23:40:35.910: INFO: Created: latency-svc-rns6b
May 15 23:40:35.923: INFO: Got endpoints: latency-svc-rns6b [820.666716ms]
May 15 23:40:35.981: INFO: Created: latency-svc-vd984
May 15 23:40:35.989: INFO: Got endpoints: latency-svc-vd984 [828.44882ms]
May 15 23:40:36.018: INFO: Created: latency-svc-f687s
May 15 23:40:36.031: INFO: Got endpoints: latency-svc-f687s [800.059846ms]
May 15 23:40:36.061: INFO: Created: latency-svc-ssmww
May 15 23:40:36.132: INFO: Got endpoints: latency-svc-ssmww [861.202782ms]
May 15 23:40:36.138: INFO: Created: latency-svc-qzljc
May 15 23:40:36.153: INFO: Got endpoints: latency-svc-qzljc [771.321404ms]
May 15 23:40:36.216: INFO: Created: latency-svc-sjgx2
May 15 23:40:36.303: INFO: Got endpoints: latency-svc-sjgx2 [897.109418ms]
May 15 23:40:36.306: INFO: Created: latency-svc-sbjrj
May 15 23:40:36.315: INFO: Got endpoints: latency-svc-sbjrj [862.443456ms]
May 15 23:40:36.336: INFO: Created: latency-svc-lmnsx
May 15 23:40:36.346: INFO: Got endpoints: latency-svc-lmnsx [821.959494ms]
May 15 23:40:36.366: INFO: Created: latency-svc-hhnm2
May 15 23:40:36.384: INFO: Got endpoints: latency-svc-hhnm2 [822.669035ms]
May 15 23:40:36.458: INFO: Created: latency-svc-hbzqw
May 15 23:40:36.510: INFO: Got endpoints: latency-svc-hbzqw [918.671841ms]
May 15 23:40:36.511: INFO: Created: latency-svc-h5rft
May 15 23:40:36.596: INFO: Got endpoints: latency-svc-h5rft [975.218348ms]
May 15 23:40:36.655: INFO: Created: latency-svc-bhbzd
May 15 23:40:36.678: INFO: Got endpoints: latency-svc-bhbzd [949.006501ms]
May 15 23:40:36.734: INFO: Created: latency-svc-kkfrl
May 15 23:40:36.739: INFO: Got endpoints: latency-svc-kkfrl [979.168004ms]
May 15 23:40:36.768: INFO: Created: latency-svc-d4flw
May 15 23:40:36.783: INFO: Got endpoints: latency-svc-d4flw [952.256891ms]
May 15 23:40:36.816: INFO: Created: latency-svc-r9p9b
May 15 23:40:36.890: INFO: Got endpoints: latency-svc-r9p9b [1.016594078s]
May 15 23:40:36.936: INFO: Created: latency-svc-rt5wg
May 15 23:40:36.961: INFO: Got endpoints: latency-svc-rt5wg [1.038140105s]
May 15 23:40:37.041: INFO: Created:
latency-svc-28xnk May 15 23:40:37.062: INFO: Got endpoints: latency-svc-28xnk [1.073200219s] May 15 23:40:37.122: INFO: Created: latency-svc-kwmvv May 15 23:40:37.135: INFO: Got endpoints: latency-svc-kwmvv [1.103756811s] May 15 23:40:37.218: INFO: Created: latency-svc-258wr May 15 23:40:37.231: INFO: Got endpoints: latency-svc-258wr [1.099143278s] May 15 23:40:37.266: INFO: Created: latency-svc-vxfx5 May 15 23:40:37.280: INFO: Got endpoints: latency-svc-vxfx5 [1.126943432s] May 15 23:40:37.382: INFO: Created: latency-svc-q95dm May 15 23:40:37.389: INFO: Got endpoints: latency-svc-q95dm [1.08611321s] May 15 23:40:37.416: INFO: Created: latency-svc-slwhf May 15 23:40:37.440: INFO: Got endpoints: latency-svc-slwhf [1.12578367s] May 15 23:40:37.513: INFO: Created: latency-svc-gql4n May 15 23:40:37.548: INFO: Got endpoints: latency-svc-gql4n [1.201895668s] May 15 23:40:37.548: INFO: Created: latency-svc-8xltj May 15 23:40:37.687: INFO: Got endpoints: latency-svc-8xltj [1.302896171s] May 15 23:40:37.770: INFO: Created: latency-svc-m8tdx May 15 23:40:37.848: INFO: Got endpoints: latency-svc-m8tdx [1.338687387s] May 15 23:40:37.873: INFO: Created: latency-svc-g5slx May 15 23:40:37.899: INFO: Got endpoints: latency-svc-g5slx [1.302493418s] May 15 23:40:37.932: INFO: Created: latency-svc-w4nzf May 15 23:40:37.986: INFO: Got endpoints: latency-svc-w4nzf [1.30784474s] May 15 23:40:38.016: INFO: Created: latency-svc-m72rd May 15 23:40:38.031: INFO: Got endpoints: latency-svc-m72rd [1.292661691s] May 15 23:40:38.046: INFO: Created: latency-svc-jdsl6 May 15 23:40:38.082: INFO: Got endpoints: latency-svc-jdsl6 [1.298797217s] May 15 23:40:38.107: INFO: Created: latency-svc-5rdwb May 15 23:40:38.122: INFO: Got endpoints: latency-svc-5rdwb [1.232019548s] May 15 23:40:38.172: INFO: Created: latency-svc-qpgfk May 15 23:40:38.214: INFO: Got endpoints: latency-svc-qpgfk [1.252538826s] May 15 23:40:38.232: INFO: Created: latency-svc-bk6xr May 15 23:40:38.262: INFO: Got endpoints: latency-svc-bk6xr [1.199997873s] May 15 23:40:38.351: INFO: Created: latency-svc-dfjj7 May 15 23:40:38.356: INFO: Got endpoints: latency-svc-dfjj7 [1.220416055s] May 15 23:40:38.376: INFO: Created: latency-svc-8v7s9 May 15 23:40:38.391: INFO: Got endpoints: latency-svc-8v7s9 [1.159873176s] May 15 23:40:38.418: INFO: Created: latency-svc-gz9bl May 15 23:40:38.513: INFO: Got endpoints: latency-svc-gz9bl [1.233038602s] May 15 23:40:38.515: INFO: Created: latency-svc-hnt7m May 15 23:40:38.527: INFO: Got endpoints: latency-svc-hnt7m [1.137440255s] May 15 23:40:38.559: INFO: Created: latency-svc-gmw8n May 15 23:40:38.574: INFO: Got endpoints: latency-svc-gmw8n [1.133722561s] May 15 23:40:38.604: INFO: Created: latency-svc-z9kxn May 15 23:40:38.644: INFO: Got endpoints: latency-svc-z9kxn [1.096110631s] May 15 23:40:38.658: INFO: Created: latency-svc-h67xg May 15 23:40:38.676: INFO: Got endpoints: latency-svc-h67xg [989.091456ms] May 15 23:40:38.700: INFO: Created: latency-svc-4ttjq May 15 23:40:38.712: INFO: Got endpoints: latency-svc-4ttjq [863.563995ms] May 15 23:40:38.730: INFO: Created: latency-svc-xxhxb May 15 23:40:38.742: INFO: Got endpoints: latency-svc-xxhxb [843.107725ms] May 15 23:40:38.839: INFO: Created: latency-svc-hvtff May 15 23:40:38.862: INFO: Got endpoints: latency-svc-hvtff [875.764674ms] May 15 23:40:38.880: INFO: Created: latency-svc-b2dl5 May 15 23:40:38.904: INFO: Got endpoints: latency-svc-b2dl5 [872.676269ms] May 15 23:40:38.986: INFO: Created: latency-svc-7rwvx May 15 23:40:38.995: INFO: Got endpoints: 
latency-svc-7rwvx [912.542447ms] May 15 23:40:39.013: INFO: Created: latency-svc-n54dw May 15 23:40:39.025: INFO: Got endpoints: latency-svc-n54dw [902.707296ms] May 15 23:40:39.066: INFO: Created: latency-svc-km7d5 May 15 23:40:39.079: INFO: Got endpoints: latency-svc-km7d5 [865.197818ms] May 15 23:40:39.162: INFO: Created: latency-svc-zb9bf May 15 23:40:39.175: INFO: Got endpoints: latency-svc-zb9bf [912.747918ms] May 15 23:40:39.192: INFO: Created: latency-svc-shglc May 15 23:40:39.216: INFO: Got endpoints: latency-svc-shglc [860.410772ms] May 15 23:40:39.281: INFO: Created: latency-svc-lbm72 May 15 23:40:39.307: INFO: Got endpoints: latency-svc-lbm72 [915.292059ms] May 15 23:40:39.330: INFO: Created: latency-svc-jzgrz May 15 23:40:39.344: INFO: Got endpoints: latency-svc-jzgrz [831.361187ms] May 15 23:40:39.378: INFO: Created: latency-svc-fmw2x May 15 23:40:39.441: INFO: Got endpoints: latency-svc-fmw2x [914.203667ms] May 15 23:40:39.443: INFO: Created: latency-svc-89n4x May 15 23:40:39.452: INFO: Got endpoints: latency-svc-89n4x [878.259364ms] May 15 23:40:39.504: INFO: Created: latency-svc-w9f44 May 15 23:40:39.524: INFO: Got endpoints: latency-svc-w9f44 [879.005152ms] May 15 23:40:39.590: INFO: Created: latency-svc-jffz9 May 15 23:40:39.594: INFO: Got endpoints: latency-svc-jffz9 [917.831061ms] May 15 23:40:39.642: INFO: Created: latency-svc-wnjws May 15 23:40:39.651: INFO: Got endpoints: latency-svc-wnjws [939.275873ms] May 15 23:40:39.684: INFO: Created: latency-svc-c6wf5 May 15 23:40:39.752: INFO: Got endpoints: latency-svc-c6wf5 [1.010262495s] May 15 23:40:39.779: INFO: Created: latency-svc-bb72z May 15 23:40:39.785: INFO: Got endpoints: latency-svc-bb72z [922.851463ms] May 15 23:40:39.804: INFO: Created: latency-svc-qq8tv May 15 23:40:39.816: INFO: Got endpoints: latency-svc-qq8tv [911.320309ms] May 15 23:40:39.846: INFO: Created: latency-svc-f8m5m May 15 23:40:39.944: INFO: Got endpoints: latency-svc-f8m5m [949.705859ms] May 15 23:40:39.948: INFO: Created: latency-svc-jkk4s May 15 23:40:39.967: INFO: Got endpoints: latency-svc-jkk4s [941.758665ms] May 15 23:40:39.990: INFO: Created: latency-svc-j2kxp May 15 23:40:40.021: INFO: Got endpoints: latency-svc-j2kxp [942.11628ms] May 15 23:40:40.111: INFO: Created: latency-svc-cb2w8 May 15 23:40:40.135: INFO: Got endpoints: latency-svc-cb2w8 [960.145769ms] May 15 23:40:40.188: INFO: Created: latency-svc-w7k4q May 15 23:40:40.279: INFO: Got endpoints: latency-svc-w7k4q [1.063024922s] May 15 23:40:40.315: INFO: Created: latency-svc-ccvcg May 15 23:40:40.339: INFO: Got endpoints: latency-svc-ccvcg [1.03239662s] May 15 23:40:40.459: INFO: Created: latency-svc-b5h7l May 15 23:40:40.463: INFO: Got endpoints: latency-svc-b5h7l [1.11903225s] May 15 23:40:40.519: INFO: Created: latency-svc-mjbmh May 15 23:40:40.538: INFO: Got endpoints: latency-svc-mjbmh [1.096731934s] May 15 23:40:40.555: INFO: Created: latency-svc-mx2bn May 15 23:40:40.596: INFO: Got endpoints: latency-svc-mx2bn [1.143795654s] May 15 23:40:40.598: INFO: Created: latency-svc-mq4r4 May 15 23:40:40.627: INFO: Got endpoints: latency-svc-mq4r4 [1.103087715s] May 15 23:40:40.651: INFO: Created: latency-svc-cr9kp May 15 23:40:40.664: INFO: Got endpoints: latency-svc-cr9kp [1.070330983s] May 15 23:40:40.681: INFO: Created: latency-svc-8kqjh May 15 23:40:40.729: INFO: Got endpoints: latency-svc-8kqjh [1.077618931s] May 15 23:40:40.741: INFO: Created: latency-svc-rbdvw May 15 23:40:40.756: INFO: Got endpoints: latency-svc-rbdvw [1.003534541s] May 15 23:40:40.783: INFO: Created: 
latency-svc-ptdsk May 15 23:40:40.797: INFO: Got endpoints: latency-svc-ptdsk [1.012602617s] May 15 23:40:40.890: INFO: Created: latency-svc-flgs6 May 15 23:40:40.910: INFO: Got endpoints: latency-svc-flgs6 [1.093999322s] May 15 23:40:40.940: INFO: Created: latency-svc-lp8k8 May 15 23:40:40.954: INFO: Got endpoints: latency-svc-lp8k8 [1.009312317s] May 15 23:40:40.969: INFO: Created: latency-svc-rzzwx May 15 23:40:41.022: INFO: Got endpoints: latency-svc-rzzwx [1.055292073s] May 15 23:40:41.035: INFO: Created: latency-svc-lzqfl May 15 23:40:41.071: INFO: Got endpoints: latency-svc-lzqfl [1.049899524s] May 15 23:40:41.159: INFO: Created: latency-svc-8bf4d May 15 23:40:41.171: INFO: Got endpoints: latency-svc-8bf4d [1.035626187s] May 15 23:40:41.191: INFO: Created: latency-svc-vs8ld May 15 23:40:41.227: INFO: Got endpoints: latency-svc-vs8ld [947.658839ms] May 15 23:40:41.297: INFO: Created: latency-svc-rtj2x May 15 23:40:41.317: INFO: Got endpoints: latency-svc-rtj2x [977.651029ms] May 15 23:40:41.317: INFO: Created: latency-svc-szcv2 May 15 23:40:41.335: INFO: Got endpoints: latency-svc-szcv2 [871.682004ms] May 15 23:40:41.359: INFO: Created: latency-svc-xj8jv May 15 23:40:41.370: INFO: Got endpoints: latency-svc-xj8jv [831.909382ms] May 15 23:40:41.389: INFO: Created: latency-svc-dj758 May 15 23:40:41.440: INFO: Got endpoints: latency-svc-dj758 [843.977877ms] May 15 23:40:41.442: INFO: Created: latency-svc-bl4hv May 15 23:40:41.479: INFO: Got endpoints: latency-svc-bl4hv [852.052519ms] May 15 23:40:41.534: INFO: Created: latency-svc-xh5w5 May 15 23:40:41.591: INFO: Got endpoints: latency-svc-xh5w5 [926.235594ms] May 15 23:40:41.605: INFO: Created: latency-svc-xzm7d May 15 23:40:41.616: INFO: Got endpoints: latency-svc-xzm7d [886.538335ms] May 15 23:40:41.635: INFO: Created: latency-svc-cpgst May 15 23:40:41.646: INFO: Got endpoints: latency-svc-cpgst [889.9215ms] May 15 23:40:41.665: INFO: Created: latency-svc-8t76v May 15 23:40:41.676: INFO: Got endpoints: latency-svc-8t76v [878.915136ms] May 15 23:40:41.734: INFO: Created: latency-svc-qw4gf May 15 23:40:41.749: INFO: Got endpoints: latency-svc-qw4gf [839.41999ms] May 15 23:40:41.779: INFO: Created: latency-svc-rlngl May 15 23:40:41.791: INFO: Got endpoints: latency-svc-rlngl [837.452396ms] May 15 23:40:41.809: INFO: Created: latency-svc-k5bd4 May 15 23:40:41.821: INFO: Got endpoints: latency-svc-k5bd4 [799.349928ms] May 15 23:40:41.896: INFO: Created: latency-svc-w7xv7 May 15 23:40:41.947: INFO: Got endpoints: latency-svc-w7xv7 [876.266426ms] May 15 23:40:41.952: INFO: Created: latency-svc-hl6rt May 15 23:40:41.971: INFO: Got endpoints: latency-svc-hl6rt [799.777892ms] May 15 23:40:42.033: INFO: Created: latency-svc-fdskp May 15 23:40:42.037: INFO: Got endpoints: latency-svc-fdskp [810.357795ms] May 15 23:40:42.061: INFO: Created: latency-svc-q5jgx May 15 23:40:42.075: INFO: Got endpoints: latency-svc-q5jgx [757.931913ms] May 15 23:40:42.091: INFO: Created: latency-svc-44dhd May 15 23:40:42.105: INFO: Got endpoints: latency-svc-44dhd [770.200565ms] May 15 23:40:42.105: INFO: Latencies: [104.181003ms 274.479152ms 294.571027ms 352.397606ms 437.752311ms 473.735258ms 609.776053ms 654.340611ms 738.33064ms 757.931913ms 770.200565ms 771.321404ms 774.891379ms 799.349928ms 799.777892ms 800.059846ms 810.357795ms 813.854507ms 820.666716ms 821.959494ms 822.669035ms 828.44882ms 829.49811ms 831.361187ms 831.909382ms 837.452396ms 839.41999ms 843.107725ms 843.977877ms 852.052519ms 855.515278ms 859.094916ms 860.410772ms 861.202782ms 862.443456ms 
863.563995ms 865.197818ms 871.682004ms 872.676269ms 872.82132ms 875.764674ms 876.266426ms 878.259364ms 878.915136ms 879.005152ms 886.538335ms 889.9215ms 897.109418ms 902.707296ms 911.320309ms 912.542447ms 912.747918ms 913.379849ms 914.203667ms 915.292059ms 917.831061ms 918.624134ms 918.671841ms 919.782639ms 922.851463ms 926.235594ms 939.275873ms 941.758665ms 942.11628ms 947.658839ms 949.006501ms 949.705859ms 952.256891ms 960.145769ms 975.218348ms 977.651029ms 979.168004ms 985.979073ms 989.091456ms 1.003534541s 1.009312317s 1.010262495s 1.011630696s 1.012602617s 1.016508325s 1.016594078s 1.030760501s 1.03239662s 1.035626187s 1.038140105s 1.049899524s 1.055292073s 1.057629548s 1.063024922s 1.06609845s 1.070330983s 1.073200219s 1.076547178s 1.077618931s 1.08611321s 1.089857209s 1.093999322s 1.095400697s 1.095414661s 1.096110631s 1.096731934s 1.099143278s 1.099772865s 1.100557845s 1.102292095s 1.103087715s 1.103756811s 1.104429654s 1.11903225s 1.12578367s 1.126943432s 1.129212332s 1.129466639s 1.130052335s 1.133722561s 1.134740851s 1.137440255s 1.143795654s 1.144844393s 1.149162826s 1.149305743s 1.159087465s 1.15912754s 1.159873176s 1.160452133s 1.160947069s 1.166244274s 1.16722605s 1.167351312s 1.171675835s 1.172428998s 1.173131093s 1.18130307s 1.191030855s 1.191080232s 1.193342081s 1.197738897s 1.19871385s 1.199997873s 1.201895668s 1.211802108s 1.214827607s 1.218782863s 1.220416055s 1.221425557s 1.232019548s 1.233038602s 1.23316551s 1.239301153s 1.239315812s 1.252538826s 1.258846707s 1.270454857s 1.290868412s 1.292661691s 1.295080232s 1.298797217s 1.302493418s 1.302896171s 1.30784474s 1.32247788s 1.338687387s 1.344527927s 1.382552659s 1.403685349s 1.412317172s 1.419583296s 1.423163773s 1.452832903s 1.477052901s 1.531566538s 1.533577249s 1.553434305s 1.554799099s 1.55628754s 1.569251843s 1.569283976s 1.576011295s 1.600865118s 1.616134197s 1.622257814s 1.630594263s 1.634983157s 1.639233862s 1.64964712s 1.656654679s 1.658811876s 1.683057902s 1.695087595s 1.721500571s 1.724021589s 1.728376686s 1.754569851s 1.757343113s 1.76570503s 1.774244959s 1.779067397s 1.787952846s 1.82518739s 1.861359354s] May 15 23:40:42.105: INFO: 50 %ile: 1.096731934s May 15 23:40:42.106: INFO: 90 %ile: 1.622257814s May 15 23:40:42.106: INFO: 99 %ile: 1.82518739s May 15 23:40:42.106: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:40:42.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6516" for this suite. 
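------------------------------
The 50/90/99 %ile figures reported above are taken over the 200 endpoint-latency samples listed. As a rough illustration only, a nearest-rank pick over ascending samples reproduces numbers of this shape; the e2e framework's exact indexing may differ, and the sample literals below stand in for the full list:

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the value at the p-th percentile of an ascending sample
// set using nearest-rank indexing. This is a sketch of the shape of the
// computation, not the framework's exact rounding.
func percentile(sorted []time.Duration, p int) time.Duration {
	idx := len(sorted)*p/100 - 1
	if idx < 0 {
		idx = 0
	}
	return sorted[idx]
}

func main() {
	// Stand-ins for the 200 latencies listed above, in nanoseconds.
	samples := []time.Duration{
		104181003, 274479152, 294571027, 352397606, 437752311,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}
------------------------------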
• [SLOW TEST:19.558 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":9,"skipped":213,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:40:42.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 23:40:42.557: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6d7e5ac3-5042-41e6-8e15-07839f16595d" in namespace "projected-4564" to be "Succeeded or Failed" May 15 23:40:42.632: INFO: Pod "downwardapi-volume-6d7e5ac3-5042-41e6-8e15-07839f16595d": Phase="Pending", Reason="", readiness=false. Elapsed: 74.91048ms May 15 23:40:44.879: INFO: Pod "downwardapi-volume-6d7e5ac3-5042-41e6-8e15-07839f16595d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322106498s May 15 23:40:46.884: INFO: Pod "downwardapi-volume-6d7e5ac3-5042-41e6-8e15-07839f16595d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327437247s STEP: Saw pod success May 15 23:40:46.884: INFO: Pod "downwardapi-volume-6d7e5ac3-5042-41e6-8e15-07839f16595d" satisfied condition "Succeeded or Failed" May 15 23:40:46.888: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-6d7e5ac3-5042-41e6-8e15-07839f16595d container client-container: STEP: delete the pod May 15 23:40:47.049: INFO: Waiting for pod downwardapi-volume-6d7e5ac3-5042-41e6-8e15-07839f16595d to disappear May 15 23:40:47.138: INFO: Pod downwardapi-volume-6d7e5ac3-5042-41e6-8e15-07839f16595d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:40:47.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4564" for this suite. 
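------------------------------
The Projected downwardAPI test above exposes the container's memory limit to the pod through a projected downward API volume. A minimal sketch of such a volume using client-go's corev1 types; the volume name, file path, and container name below are illustrative, since the test's actual values do not appear in the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// buildDownwardAPIVolume builds a projected volume that surfaces the named
// container's memory limit as a file inside the pod.
func buildDownwardAPIVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo", // illustrative
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit", // illustrative
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				}},
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", buildDownwardAPIVolume())
}
------------------------------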
• [SLOW TEST:5.103 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":10,"skipped":294,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:40:47.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 15 23:40:47.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6574' May 15 23:40:47.969: INFO: stderr: "" May 15 23:40:47.969: INFO: stdout: "pod/pause created\n" May 15 23:40:47.969: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 15 23:40:47.969: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6574" to be "running and ready" May 15 23:40:48.184: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 214.613167ms May 15 23:40:50.321: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.351351294s May 15 23:40:52.328: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.358684138s May 15 23:40:52.328: INFO: Pod "pause" satisfied condition "running and ready" May 15 23:40:52.328: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 15 23:40:52.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6574' May 15 23:40:52.452: INFO: stderr: "" May 15 23:40:52.452: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 15 23:40:52.452: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6574' May 15 23:40:52.619: INFO: stderr: "" May 15 23:40:52.619: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 15 23:40:52.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6574' May 15 23:40:52.764: INFO: stderr: "" May 15 23:40:52.764: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 15 23:40:52.764: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6574' May 15 23:40:52.865: INFO: stderr: "" May 15 23:40:52.865: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 15 23:40:52.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6574' May 15 23:40:53.333: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 23:40:53.333: INFO: stdout: "pod \"pause\" force deleted\n" May 15 23:40:53.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6574' May 15 23:40:53.975: INFO: stderr: "No resources found in kubectl-6574 namespace.\n" May 15 23:40:53.975: INFO: stdout: "" May 15 23:40:53.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6574 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 15 23:40:54.146: INFO: stderr: "" May 15 23:40:54.146: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:40:54.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6574" for this suite. 
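------------------------------
The label add/remove steps above shell out to kubectl. The same effect can be achieved programmatically; a sketch with client-go, assuming the context-taking signatures of client-go v0.18+ and the kubeconfig path used throughout this run:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	pods := clientset.CoreV1().Pods("kubectl-6574")

	// Equivalent of `kubectl label pods pause testing-label=testing-label-value`.
	pod, err := pods.Get(context.TODO(), "pause", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["testing-label"] = "testing-label-value"
	pod, err = pods.Update(context.TODO(), pod, metav1.UpdateOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of `kubectl label pods pause testing-label-` (removal).
	delete(pod.Labels, "testing-label")
	if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}
------------------------------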
• [SLOW TEST:7.222 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":11,"skipped":307,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:40:54.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-15c11533-1b40-436f-9c18-bf471cb74bc9 STEP: Creating a pod to test consume configMaps May 15 23:40:54.723: INFO: Waiting up to 5m0s for pod "pod-configmaps-aaa13aaf-d49a-4e18-aeba-28ca015a8dea" in namespace "configmap-9441" to be "Succeeded or Failed" May 15 23:40:54.761: INFO: Pod "pod-configmaps-aaa13aaf-d49a-4e18-aeba-28ca015a8dea": Phase="Pending", Reason="", readiness=false. Elapsed: 37.558522ms May 15 23:40:56.902: INFO: Pod "pod-configmaps-aaa13aaf-d49a-4e18-aeba-28ca015a8dea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178409882s May 15 23:40:59.604: INFO: Pod "pod-configmaps-aaa13aaf-d49a-4e18-aeba-28ca015a8dea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.880650616s STEP: Saw pod success May 15 23:40:59.604: INFO: Pod "pod-configmaps-aaa13aaf-d49a-4e18-aeba-28ca015a8dea" satisfied condition "Succeeded or Failed" May 15 23:40:59.613: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-aaa13aaf-d49a-4e18-aeba-28ca015a8dea container configmap-volume-test: STEP: delete the pod May 15 23:41:00.104: INFO: Waiting for pod pod-configmaps-aaa13aaf-d49a-4e18-aeba-28ca015a8dea to disappear May 15 23:41:00.279: INFO: Pod pod-configmaps-aaa13aaf-d49a-4e18-aeba-28ca015a8dea no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:41:00.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9441" for this suite. 
• [SLOW TEST:5.913 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":12,"skipped":316,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:41:00.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:41:17.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5163" for this suite. • [SLOW TEST:16.679 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":288,"completed":13,"skipped":335,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:41:17.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 15 23:41:25.271: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 23:41:25.328: INFO: Pod pod-with-prestop-http-hook still exists May 15 23:41:27.328: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 23:41:27.333: INFO: Pod pod-with-prestop-http-hook still exists May 15 23:41:29.328: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 23:41:29.332: INFO: Pod pod-with-prestop-http-hook still exists May 15 23:41:31.328: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 23:41:31.333: INFO: Pod pod-with-prestop-http-hook still exists May 15 23:41:33.328: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 23:41:33.332: INFO: Pod pod-with-prestop-http-hook still exists May 15 23:41:35.328: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 23:41:35.332: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:41:35.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3442" for this suite. 
• [SLOW TEST:18.310 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":14,"skipped":353,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:41:35.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 15 23:41:35.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6109' May 15 23:41:35.648: INFO: stderr: "" May 15 23:41:35.648: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 15 23:41:35.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6109' May 15 23:41:35.795: INFO: stderr: "" May 15 23:41:35.796: INFO: stdout: "update-demo-nautilus-cxwr5 update-demo-nautilus-htpc6 " May 15 23:41:35.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cxwr5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6109' May 15 23:41:35.899: INFO: stderr: "" May 15 23:41:35.899: INFO: stdout: "" May 15 23:41:35.899: INFO: update-demo-nautilus-cxwr5 is created but not running May 15 23:41:40.899: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6109' May 15 23:41:41.027: INFO: stderr: "" May 15 23:41:41.027: INFO: stdout: "update-demo-nautilus-cxwr5 update-demo-nautilus-htpc6 " May 15 23:41:41.028: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cxwr5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6109' May 15 23:41:41.134: INFO: stderr: "" May 15 23:41:41.134: INFO: stdout: "true" May 15 23:41:41.134: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cxwr5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6109' May 15 23:41:41.405: INFO: stderr: "" May 15 23:41:41.405: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 23:41:41.405: INFO: validating pod update-demo-nautilus-cxwr5 May 15 23:41:41.435: INFO: got data: { "image": "nautilus.jpg" } May 15 23:41:41.435: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 23:41:41.435: INFO: update-demo-nautilus-cxwr5 is verified up and running May 15 23:41:41.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-htpc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6109' May 15 23:41:41.538: INFO: stderr: "" May 15 23:41:41.538: INFO: stdout: "true" May 15 23:41:41.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-htpc6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6109' May 15 23:41:41.645: INFO: stderr: "" May 15 23:41:41.645: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 23:41:41.645: INFO: validating pod update-demo-nautilus-htpc6 May 15 23:41:41.666: INFO: got data: { "image": "nautilus.jpg" } May 15 23:41:41.666: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 15 23:41:41.666: INFO: update-demo-nautilus-htpc6 is verified up and running STEP: scaling down the replication controller May 15 23:41:41.668: INFO: scanned /root for discovery docs: May 15 23:41:41.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6109' May 15 23:41:42.855: INFO: stderr: "" May 15 23:41:42.855: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 15 23:41:42.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6109' May 15 23:41:42.973: INFO: stderr: "" May 15 23:41:42.973: INFO: stdout: "update-demo-nautilus-cxwr5 update-demo-nautilus-htpc6 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 15 23:41:47.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6109' May 15 23:41:48.087: INFO: stderr: "" May 15 23:41:48.087: INFO: stdout: "update-demo-nautilus-cxwr5 update-demo-nautilus-htpc6 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 15 23:41:53.087: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6109' May 15 23:41:53.204: INFO: stderr: "" May 15 23:41:53.204: INFO: stdout: "update-demo-nautilus-cxwr5 update-demo-nautilus-htpc6 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 15 23:41:58.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6109' May 15 23:41:58.310: INFO: stderr: "" May 15 23:41:58.310: INFO: stdout: "update-demo-nautilus-htpc6 " May 15 23:41:58.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-htpc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6109' May 15 23:41:58.416: INFO: stderr: "" May 15 23:41:58.416: INFO: stdout: "true" May 15 23:41:58.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-htpc6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6109' May 15 23:41:58.550: INFO: stderr: "" May 15 23:41:58.550: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 23:41:58.550: INFO: validating pod update-demo-nautilus-htpc6 May 15 23:41:58.553: INFO: got data: { "image": "nautilus.jpg" } May 15 23:41:58.553: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 15 23:41:58.553: INFO: update-demo-nautilus-htpc6 is verified up and running STEP: scaling up the replication controller May 15 23:41:58.555: INFO: scanned /root for discovery docs: May 15 23:41:58.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6109' May 15 23:41:59.724: INFO: stderr: "" May 15 23:41:59.724: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 15 23:41:59.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6109' May 15 23:41:59.822: INFO: stderr: "" May 15 23:41:59.822: INFO: stdout: "update-demo-nautilus-htpc6 update-demo-nautilus-qz48v " May 15 23:41:59.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-htpc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6109' May 15 23:41:59.922: INFO: stderr: "" May 15 23:41:59.922: INFO: stdout: "true" May 15 23:41:59.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-htpc6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6109' May 15 23:42:00.109: INFO: stderr: "" May 15 23:42:00.109: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 23:42:00.109: INFO: validating pod update-demo-nautilus-htpc6 May 15 23:42:00.113: INFO: got data: { "image": "nautilus.jpg" } May 15 23:42:00.113: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 23:42:00.113: INFO: update-demo-nautilus-htpc6 is verified up and running May 15 23:42:00.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qz48v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6109' May 15 23:42:00.207: INFO: stderr: "" May 15 23:42:00.207: INFO: stdout: "" May 15 23:42:00.207: INFO: update-demo-nautilus-qz48v is created but not running May 15 23:42:05.207: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6109' May 15 23:42:05.314: INFO: stderr: "" May 15 23:42:05.314: INFO: stdout: "update-demo-nautilus-htpc6 update-demo-nautilus-qz48v " May 15 23:42:05.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-htpc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6109' May 15 23:42:05.415: INFO: stderr: "" May 15 23:42:05.415: INFO: stdout: "true" May 15 23:42:05.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-htpc6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6109' May 15 23:42:05.516: INFO: stderr: "" May 15 23:42:05.516: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 23:42:05.516: INFO: validating pod update-demo-nautilus-htpc6 May 15 23:42:05.519: INFO: got data: { "image": "nautilus.jpg" } May 15 23:42:05.519: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 23:42:05.519: INFO: update-demo-nautilus-htpc6 is verified up and running May 15 23:42:05.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qz48v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6109' May 15 23:42:05.607: INFO: stderr: "" May 15 23:42:05.607: INFO: stdout: "true" May 15 23:42:05.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qz48v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6109' May 15 23:42:05.704: INFO: stderr: "" May 15 23:42:05.704: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 23:42:05.704: INFO: validating pod update-demo-nautilus-qz48v May 15 23:42:05.707: INFO: got data: { "image": "nautilus.jpg" } May 15 23:42:05.707: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 23:42:05.707: INFO: update-demo-nautilus-qz48v is verified up and running STEP: using delete to clean up resources May 15 23:42:05.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6109' May 15 23:42:05.817: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 15 23:42:05.818: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 15 23:42:05.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6109' May 15 23:42:05.933: INFO: stderr: "No resources found in kubectl-6109 namespace.\n" May 15 23:42:05.933: INFO: stdout: "" May 15 23:42:05.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6109 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 15 23:42:06.036: INFO: stderr: "" May 15 23:42:06.036: INFO: stdout: "update-demo-nautilus-htpc6\nupdate-demo-nautilus-qz48v\n" May 15 23:42:06.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6109' May 15 23:42:06.651: INFO: stderr: "No resources found in kubectl-6109 namespace.\n" May 15 23:42:06.651: INFO: stdout: "" May 15 23:42:06.651: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6109 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 15 23:42:06.814: INFO: stderr: "" May 15 23:42:06.814: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:42:06.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6109" for this suite. 
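------------------------------
The scale-down and scale-up steps above run `kubectl scale rc update-demo-nautilus --replicas=N` and then poll until the pod list converges. A get-modify-update sketch of the same operation with client-go; a production client would wrap this in a conflict retry, omitted here for brevity:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// scaleRC sets the replica count on a replication controller, the programmatic
// counterpart of `kubectl scale rc <name> --replicas=N`.
func scaleRC(clientset kubernetes.Interface, ns, name string, replicas int32) error {
	rc, err := clientset.CoreV1().ReplicationControllers(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	rc.Spec.Replicas = &replicas
	_, err = clientset.CoreV1().ReplicationControllers(ns).Update(context.TODO(), rc, metav1.UpdateOptions{})
	return err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	if err := scaleRC(clientset, "kubectl-6109", "update-demo-nautilus", 1); err != nil {
		log.Fatal(err)
	}
}
------------------------------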
• [SLOW TEST:31.475 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":15,"skipped":356,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:42:06.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 15 23:42:11.279: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:42:11.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2059" for this suite. 
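------------------------------
In the termination-message test above, the container writes "OK" to the termination-message file and exits successfully, so the message is read from the file even though the policy is FallbackToLogsOnError; the log tail is only consulted when the container fails and the file is empty. A sketch of such a container, with illustrative image and command:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// terminationMessageContainer writes its message to the default
// terminationMessagePath and exits 0, so the kubelet reports the file
// contents as the termination message.
func terminationMessageContainer() corev1.Container {
	return corev1.Container{
		Name:                     "termination-message-container", // illustrative
		Image:                    "busybox:1.31",                  // illustrative
		Command:                  []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}

func main() {
	fmt.Printf("%+v\n", terminationMessageContainer())
}
------------------------------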
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":16,"skipped":366,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:42:11.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 23:42:11.446: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-c0fca752-7913-445c-bb45-cb109e1476c3" in namespace "security-context-test-326" to be "Succeeded or Failed" May 15 23:42:11.550: INFO: Pod "busybox-privileged-false-c0fca752-7913-445c-bb45-cb109e1476c3": Phase="Pending", Reason="", readiness=false. Elapsed: 103.859538ms May 15 23:42:13.597: INFO: Pod "busybox-privileged-false-c0fca752-7913-445c-bb45-cb109e1476c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15173288s May 15 23:42:15.609: INFO: Pod "busybox-privileged-false-c0fca752-7913-445c-bb45-cb109e1476c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.162808891s May 15 23:42:15.609: INFO: Pod "busybox-privileged-false-c0fca752-7913-445c-bb45-cb109e1476c3" satisfied condition "Succeeded or Failed" May 15 23:42:15.627: INFO: Got logs for pod "busybox-privileged-false-c0fca752-7913-445c-bb45-cb109e1476c3": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:42:15.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-326" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":17,"skipped":367,"failed":0} SS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:42:15.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 15 23:42:15.790: INFO: Waiting up to 5m0s for pod "downward-api-10154ff1-a72e-4046-99af-9134a3670230" in namespace "downward-api-8681" to be "Succeeded or Failed" May 15 23:42:15.794: INFO: Pod "downward-api-10154ff1-a72e-4046-99af-9134a3670230": Phase="Pending", Reason="", readiness=false. Elapsed: 3.791502ms May 15 23:42:17.989: INFO: Pod "downward-api-10154ff1-a72e-4046-99af-9134a3670230": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198806597s May 15 23:42:19.993: INFO: Pod "downward-api-10154ff1-a72e-4046-99af-9134a3670230": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.203105225s STEP: Saw pod success May 15 23:42:19.993: INFO: Pod "downward-api-10154ff1-a72e-4046-99af-9134a3670230" satisfied condition "Succeeded or Failed" May 15 23:42:19.997: INFO: Trying to get logs from node latest-worker2 pod downward-api-10154ff1-a72e-4046-99af-9134a3670230 container dapi-container: STEP: delete the pod May 15 23:42:20.056: INFO: Waiting for pod downward-api-10154ff1-a72e-4046-99af-9134a3670230 to disappear May 15 23:42:20.078: INFO: Pod downward-api-10154ff1-a72e-4046-99af-9134a3670230 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:42:20.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8681" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":18,"skipped":369,"failed":0} SSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:42:20.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 23:42:20.559: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 15 23:42:20.574: INFO: Pod name sample-pod: Found 0 pods out of 1 May 15 23:42:25.578: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 15 23:42:25.578: INFO: Creating deployment "test-rolling-update-deployment" May 15 23:42:25.582: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 15 23:42:25.608: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 15 23:42:27.616: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 15 23:42:27.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725182945, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725182945, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725182945, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725182945, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 23:42:29.622: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 15 23:42:29.632: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-5435 /apis/apps/v1/namespaces/deployment-5435/deployments/test-rolling-update-deployment 3b377bc5-cba4-4558-9816-6b81042c8a01 4996985 1 2020-05-15 23:42:25 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-15 23:42:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-15 23:42:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002451418 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-15 23:42:25 +0000 UTC,LastTransitionTime:2020-05-15 23:42:25 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-15 23:42:29 +0000 UTC,LastTransitionTime:2020-05-15 23:42:25 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 15 23:42:29.635: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-5435 /apis/apps/v1/namespaces/deployment-5435/replicasets/test-rolling-update-deployment-df7bb669b 90bd42cc-328d-4ce4-b1c7-3c724f5db7b0 4996973 1 2020-05-15 23:42:25 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 3b377bc5-cba4-4558-9816-6b81042c8a01 0xc002451a50 0xc002451a51}] [] [{kube-controller-manager Update apps/v1 2020-05-15 23:42:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b377bc5-cba4-4558-9816-6b81042c8a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002451bf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 15 23:42:29.635: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 15 23:42:29.635: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5435 /apis/apps/v1/namespaces/deployment-5435/replicasets/test-rolling-update-controller e42b236a-4358-4639-ac56-4d734cff4e03 4996984 2 2020-05-15 23:42:20 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 3b377bc5-cba4-4558-9816-6b81042c8a01 0xc002451867 0xc002451868}] [] [{e2e.test Update apps/v1 2020-05-15 23:42:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-15 23:42:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b377bc5-cba4-4558-9816-6b81042c8a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002451988 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 15 23:42:29.639: INFO: Pod "test-rolling-update-deployment-df7bb669b-rd892" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-rd892 test-rolling-update-deployment-df7bb669b- deployment-5435 /api/v1/namespaces/deployment-5435/pods/test-rolling-update-deployment-df7bb669b-rd892 98ba3fe2-5b14-4fb5-9dc4-dbd5a3403589 4996972 0 2020-05-15 23:42:25 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b 90bd42cc-328d-4ce4-b1c7-3c724f5db7b0 0xc0022ba4a0 0xc0022ba4a1}] [] [{kube-controller-manager Update v1 2020-05-15 23:42:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"90bd42cc-328d-4ce4-b1c7-3c724f5db7b0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-15 23:42:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.91\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-czkwm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-czkwm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-czkwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 23:42:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-15 23:42:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 23:42:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 23:42:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.91,StartTime:2020-05-15 23:42:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 23:42:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://4161da2da7e7c7343c4f9a5628461134c700dcb9706513ba734dd5a087c190be,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:42:29.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5435" for this suite. • [SLOW TEST:9.325 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":19,"skipped":373,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:42:29.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1111.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1111.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 23:42:37.895: INFO: DNS probes using dns-1111/dns-test-385b2c7e-7bd8-4c7b-a8a9-4357aa1cab35 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:42:37.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1111" for this suite. • [SLOW TEST:8.351 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":20,"skipped":388,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:42:38.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:42:38.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-311" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":21,"skipped":409,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:42:38.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 23:42:39.260: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 23:42:41.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725182959, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725182959, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725182959, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725182959, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 23:42:43.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725182959, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725182959, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725182959, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725182959, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 23:42:46.313: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 23:42:46.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:42:47.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3237" for this suite. STEP: Destroying namespace "webhook-3237-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.103 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":22,"skipped":430,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:42:47.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 15 23:42:47.767: INFO: Waiting up to 5m0s for pod "pod-974e7b6f-23af-49ea-ba2e-2cf2dd75396f" in namespace "emptydir-5037" to be "Succeeded or Failed" May 15 23:42:47.780: INFO: Pod "pod-974e7b6f-23af-49ea-ba2e-2cf2dd75396f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.120992ms May 15 23:42:49.831: INFO: Pod "pod-974e7b6f-23af-49ea-ba2e-2cf2dd75396f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06364863s May 15 23:42:51.836: INFO: Pod "pod-974e7b6f-23af-49ea-ba2e-2cf2dd75396f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.06840375s STEP: Saw pod success May 15 23:42:51.836: INFO: Pod "pod-974e7b6f-23af-49ea-ba2e-2cf2dd75396f" satisfied condition "Succeeded or Failed" May 15 23:42:51.840: INFO: Trying to get logs from node latest-worker2 pod pod-974e7b6f-23af-49ea-ba2e-2cf2dd75396f container test-container: STEP: delete the pod May 15 23:42:51.889: INFO: Waiting for pod pod-974e7b6f-23af-49ea-ba2e-2cf2dd75396f to disappear May 15 23:42:51.897: INFO: Pod pod-974e7b6f-23af-49ea-ba2e-2cf2dd75396f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:42:51.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5037" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":23,"skipped":431,"failed":0} SSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:42:51.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 23:42:51.990: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-801d8141-c2b8-4bec-bbeb-05b54c15ded7" in namespace "security-context-test-7759" to be "Succeeded or Failed" May 15 23:42:51.994: INFO: Pod "alpine-nnp-false-801d8141-c2b8-4bec-bbeb-05b54c15ded7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121511ms May 15 23:42:53.998: INFO: Pod "alpine-nnp-false-801d8141-c2b8-4bec-bbeb-05b54c15ded7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008764899s May 15 23:42:56.002: INFO: Pod "alpine-nnp-false-801d8141-c2b8-4bec-bbeb-05b54c15ded7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012567178s May 15 23:42:56.002: INFO: Pod "alpine-nnp-false-801d8141-c2b8-4bec-bbeb-05b54c15ded7" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:42:56.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7759" for this suite. 
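
One way to observe what allowPrivilegeEscalation: false does, as exercised by the spec above: it sets the process's no_new_privs flag, which recent kernels expose in /proc. A sketch with an illustrative pod name; the NoNewPrivs check is a verification method assumed here, not taken from the suite:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-privesc-demo             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: alpine
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      allowPrivilegeEscalation: false
EOF
kubectl logs no-privesc-demo   # expect "NoNewPrivs: 1" on kernels that expose it
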
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":24,"skipped":437,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:42:56.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 15 23:42:56.236: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 15 23:43:06.960: INFO: >>> kubeConfig: /root/.kube/config May 15 23:43:09.906: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:43:20.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9672" for this suite. 
• [SLOW TEST:24.745 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":25,"skipped":444,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:43:20.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-1f54015f-9ab9-4143-9d49-c43fdc63f482 in namespace container-probe-3790 May 15 23:43:24.895: INFO: Started pod liveness-1f54015f-9ab9-4143-9d49-c43fdc63f482 in namespace container-probe-3790 STEP: checking the pod's current state and verifying that restartCount is present May 15 23:43:24.897: INFO: Initial restart count of pod liveness-1f54015f-9ab9-4143-9d49-c43fdc63f482 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:47:26.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3790" for this suite. 
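
The probe spec above runs for several minutes to confirm the container is never restarted while its TCP port answers. A minimal adaptation (nginx on port 80 instead of the suite's tcp:8080 target; names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tcp-liveness-demo           # illustrative name
spec:
  containers:
  - name: main
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 80           # probe succeeds as long as the port accepts connections
      initialDelaySeconds: 15
      periodSeconds: 10
EOF
# Checked over several minutes, restartCount should stay 0:
kubectl get pod tcp-liveness-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'
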
• [SLOW TEST:245.721 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":26,"skipped":461,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:47:26.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 23:47:26.968: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1a10821-d833-48b7-ab4a-dcd43d4c21f2" in namespace "downward-api-9912" to be "Succeeded or Failed" May 15 23:47:27.004: INFO: Pod "downwardapi-volume-d1a10821-d833-48b7-ab4a-dcd43d4c21f2": Phase="Pending", Reason="", readiness=false. Elapsed: 35.881804ms May 15 23:47:29.008: INFO: Pod "downwardapi-volume-d1a10821-d833-48b7-ab4a-dcd43d4c21f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039970843s May 15 23:47:31.060: INFO: Pod "downwardapi-volume-d1a10821-d833-48b7-ab4a-dcd43d4c21f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092611705s STEP: Saw pod success May 15 23:47:31.060: INFO: Pod "downwardapi-volume-d1a10821-d833-48b7-ab4a-dcd43d4c21f2" satisfied condition "Succeeded or Failed" May 15 23:47:31.064: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d1a10821-d833-48b7-ab4a-dcd43d4c21f2 container client-container: STEP: delete the pod May 15 23:47:31.115: INFO: Waiting for pod downwardapi-volume-d1a10821-d833-48b7-ab4a-dcd43d4c21f2 to disappear May 15 23:47:31.140: INFO: Pod downwardapi-volume-d1a10821-d833-48b7-ab4a-dcd43d4c21f2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:47:31.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9912" for this suite. 
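
The spec above relies on a documented default: when a container sets no memory limit, a downward API resourceFieldRef for limits.memory reports the node's allocatable memory instead. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-limit-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/podinfo/mem_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: main
          resource: limits.memory   # no limit set, so this defaults to node allocatable
EOF
kubectl logs downward-limit-demo   # prints the node's allocatable memory in bytes
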
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":27,"skipped":501,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:47:31.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 15 23:47:31.278: INFO: Waiting up to 5m0s for pod "pod-383fedb0-9166-4415-aa9f-e7dd1d7093eb" in namespace "emptydir-5241" to be "Succeeded or Failed" May 15 23:47:31.298: INFO: Pod "pod-383fedb0-9166-4415-aa9f-e7dd1d7093eb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.557988ms May 15 23:47:33.354: INFO: Pod "pod-383fedb0-9166-4415-aa9f-e7dd1d7093eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075712904s May 15 23:47:35.358: INFO: Pod "pod-383fedb0-9166-4415-aa9f-e7dd1d7093eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079199593s STEP: Saw pod success May 15 23:47:35.358: INFO: Pod "pod-383fedb0-9166-4415-aa9f-e7dd1d7093eb" satisfied condition "Succeeded or Failed" May 15 23:47:35.360: INFO: Trying to get logs from node latest-worker2 pod pod-383fedb0-9166-4415-aa9f-e7dd1d7093eb container test-container: STEP: delete the pod May 15 23:47:35.404: INFO: Waiting for pod pod-383fedb0-9166-4415-aa9f-e7dd1d7093eb to disappear May 15 23:47:35.422: INFO: Pod pod-383fedb0-9166-4415-aa9f-e7dd1d7093eb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:47:35.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5241" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":28,"skipped":507,"failed":0} S ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:47:35.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 15 23:47:40.199: INFO: Successfully updated pod "annotationupdate5bc98d9a-e496-491b-9c33-68cd098f6272" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:47:44.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-596" for this suite. • [SLOW TEST:8.824 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":29,"skipped":508,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:47:44.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod May 15 23:47:48.356: INFO: Pod pod-hostip-cec6df64-835a-4eb4-b4e2-b78aa15ce033 has hostIP: 172.17.0.12 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:47:48.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8460" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":30,"skipped":527,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:47:48.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 15 23:47:48.426: INFO: Waiting up to 5m0s for pod "pod-c41c2eff-5554-4314-8d66-1f83630b76d5" in namespace "emptydir-1461" to be "Succeeded or Failed" May 15 23:47:48.462: INFO: Pod "pod-c41c2eff-5554-4314-8d66-1f83630b76d5": Phase="Pending", Reason="", readiness=false. Elapsed: 35.450707ms May 15 23:47:50.466: INFO: Pod "pod-c41c2eff-5554-4314-8d66-1f83630b76d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039365287s May 15 23:47:52.470: INFO: Pod "pod-c41c2eff-5554-4314-8d66-1f83630b76d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043472036s STEP: Saw pod success May 15 23:47:52.470: INFO: Pod "pod-c41c2eff-5554-4314-8d66-1f83630b76d5" satisfied condition "Succeeded or Failed" May 15 23:47:52.473: INFO: Trying to get logs from node latest-worker pod pod-c41c2eff-5554-4314-8d66-1f83630b76d5 container test-container: STEP: delete the pod May 15 23:47:52.628: INFO: Waiting for pod pod-c41c2eff-5554-4314-8d66-1f83630b76d5 to disappear May 15 23:47:52.632: INFO: Pod pod-c41c2eff-5554-4314-8d66-1f83630b76d5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:47:52.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1461" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":31,"skipped":547,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:47:52.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 23:47:52.715: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 15 23:47:55.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5959 create -f -' May 15 23:47:59.335: INFO: stderr: "" May 15 23:47:59.335: INFO: stdout: "e2e-test-crd-publish-openapi-5426-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 15 23:47:59.335: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5959 delete e2e-test-crd-publish-openapi-5426-crds test-cr' May 15 23:47:59.435: INFO: stderr: "" May 15 23:47:59.435: INFO: stdout: "e2e-test-crd-publish-openapi-5426-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 15 23:47:59.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5959 apply -f -' May 15 23:47:59.756: INFO: stderr: "" May 15 23:47:59.756: INFO: stdout: "e2e-test-crd-publish-openapi-5426-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 15 23:47:59.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5959 delete e2e-test-crd-publish-openapi-5426-crds test-cr' May 15 23:47:59.879: INFO: stderr: "" May 15 23:47:59.879: INFO: stdout: "e2e-test-crd-publish-openapi-5426-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 15 23:47:59.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5426-crds' May 15 23:48:00.134: INFO: stderr: "" May 15 23:48:00.134: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5426-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:48:02.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5959" for this suite. 
• [SLOW TEST:9.429 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":32,"skipped":549,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:48:02.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 23:48:02.619: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 23:48:04.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725183282, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725183282, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725183282, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725183282, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 23:48:07.662: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:48:07.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-743" for this suite. 
STEP: Destroying namespace "webhook-743-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.815 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":33,"skipped":555,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:48:07.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 15 23:48:07.988: INFO: Waiting up to 5m0s for pod "var-expansion-bff01648-b69d-4476-98cd-9b9e43857c74" in namespace "var-expansion-3014" to be "Succeeded or Failed" May 15 23:48:07.994: INFO: Pod "var-expansion-bff01648-b69d-4476-98cd-9b9e43857c74": Phase="Pending", Reason="", readiness=false. Elapsed: 5.950781ms May 15 23:48:10.247: INFO: Pod "var-expansion-bff01648-b69d-4476-98cd-9b9e43857c74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258593191s May 15 23:48:12.251: INFO: Pod "var-expansion-bff01648-b69d-4476-98cd-9b9e43857c74": Phase="Running", Reason="", readiness=true. Elapsed: 4.263079756s May 15 23:48:14.256: INFO: Pod "var-expansion-bff01648-b69d-4476-98cd-9b9e43857c74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.268339828s STEP: Saw pod success May 15 23:48:14.256: INFO: Pod "var-expansion-bff01648-b69d-4476-98cd-9b9e43857c74" satisfied condition "Succeeded or Failed" May 15 23:48:14.260: INFO: Trying to get logs from node latest-worker2 pod var-expansion-bff01648-b69d-4476-98cd-9b9e43857c74 container dapi-container: STEP: delete the pod May 15 23:48:14.301: INFO: Waiting for pod var-expansion-bff01648-b69d-4476-98cd-9b9e43857c74 to disappear May 15 23:48:14.322: INFO: Pod var-expansion-bff01648-b69d-4476-98cd-9b9e43857c74 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:48:14.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3014" for this suite. 
• [SLOW TEST:6.449 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":34,"skipped":558,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:48:14.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:48:25.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9488" for this suite. • [SLOW TEST:11.293 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":288,"completed":35,"skipped":563,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:48:25.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-0c4974f7-6bb5-4719-bd21-7334ea0b8a9a STEP: Creating a pod to test consume secrets May 15 23:48:25.726: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-88c99fe5-d4e0-47b6-a6c2-2a5d86d28bbc" in namespace "projected-6328" to be "Succeeded or Failed" May 15 23:48:25.744: INFO: Pod "pod-projected-secrets-88c99fe5-d4e0-47b6-a6c2-2a5d86d28bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.988476ms May 15 23:48:27.748: INFO: Pod "pod-projected-secrets-88c99fe5-d4e0-47b6-a6c2-2a5d86d28bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022671129s May 15 23:48:29.751: INFO: Pod "pod-projected-secrets-88c99fe5-d4e0-47b6-a6c2-2a5d86d28bbc": Phase="Running", Reason="", readiness=true. Elapsed: 4.025670047s May 15 23:48:31.756: INFO: Pod "pod-projected-secrets-88c99fe5-d4e0-47b6-a6c2-2a5d86d28bbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029935026s STEP: Saw pod success May 15 23:48:31.756: INFO: Pod "pod-projected-secrets-88c99fe5-d4e0-47b6-a6c2-2a5d86d28bbc" satisfied condition "Succeeded or Failed" May 15 23:48:31.758: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-88c99fe5-d4e0-47b6-a6c2-2a5d86d28bbc container projected-secret-volume-test: STEP: delete the pod May 15 23:48:31.844: INFO: Waiting for pod pod-projected-secrets-88c99fe5-d4e0-47b6-a6c2-2a5d86d28bbc to disappear May 15 23:48:31.929: INFO: Pod pod-projected-secrets-88c99fe5-d4e0-47b6-a6c2-2a5d86d28bbc no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:48:31.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6328" for this suite. 
• [SLOW TEST:6.314 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":36,"skipped":565,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:48:31.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0515 23:48:33.202303 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 23:48:33.202: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:48:33.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7309" for this suite. 
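Note: the garbage-collector spec above relies on ownerReferences: the Deployment owns its ReplicaSet, which owns its pods, so a non-orphaning delete of the Deployment cascades down. The same behavior can be reproduced by hand (resource names are illustrative):

    kubectl create deployment gc-demo --image=nginx   # Deployment creates an owned ReplicaSet
    kubectl get rs -l app=gc-demo                     # the RS carries an ownerReference to gc-demo
    kubectl delete deployment gc-demo                 # default (non-orphaning) cascading delete
    kubectl get rs -l app=gc-demo                     # RS and pods are garbage collected shortly after

The brief "expected 0 rs, got 1 rs" lines in the log are just the test polling until the collector catches up.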
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":37,"skipped":575,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:48:33.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:48:41.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4724" for this suite. • [SLOW TEST:8.284 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":38,"skipped":620,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:48:41.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 15 23:48:41.571: INFO: Waiting up to 5m0s for pod "pod-18b2a71e-9398-487b-982e-63e4fddebfd9" in namespace "emptydir-607" to be "Succeeded or Failed" May 15 23:48:41.582: INFO: Pod "pod-18b2a71e-9398-487b-982e-63e4fddebfd9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.567552ms May 15 23:48:43.600: INFO: Pod "pod-18b2a71e-9398-487b-982e-63e4fddebfd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029070717s May 15 23:48:45.604: INFO: Pod "pod-18b2a71e-9398-487b-982e-63e4fddebfd9": Phase="Running", Reason="", readiness=true. Elapsed: 4.033332431s May 15 23:48:47.607: INFO: Pod "pod-18b2a71e-9398-487b-982e-63e4fddebfd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036591199s STEP: Saw pod success May 15 23:48:47.607: INFO: Pod "pod-18b2a71e-9398-487b-982e-63e4fddebfd9" satisfied condition "Succeeded or Failed" May 15 23:48:47.610: INFO: Trying to get logs from node latest-worker2 pod pod-18b2a71e-9398-487b-982e-63e4fddebfd9 container test-container: STEP: delete the pod May 15 23:48:47.638: INFO: Waiting for pod pod-18b2a71e-9398-487b-982e-63e4fddebfd9 to disappear May 15 23:48:47.666: INFO: Pod pod-18b2a71e-9398-487b-982e-63e4fddebfd9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:48:47.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-607" for this suite. • [SLOW TEST:6.177 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":39,"skipped":653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:48:47.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server May 15 23:48:47.757: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:48:47.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2530" for this suite. 
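Note: with -p 0 the proxy binds an ephemeral port and reports it on stdout, which is what the test parses before curling /api/. Roughly (the port below is illustrative):

    kubectl proxy -p 0 --disable-filter &
    # stdout prints e.g. "Starting to serve on 127.0.0.1:37041"
    curl http://127.0.0.1:37041/api/    # returns the APIVersions object as JSON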
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":40,"skipped":678,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:48:47.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 23:48:47.926: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a11aac7a-c2de-48d6-86f2-b7a1f1a7355b" in namespace "downward-api-977" to be "Succeeded or Failed" May 15 23:48:47.983: INFO: Pod "downwardapi-volume-a11aac7a-c2de-48d6-86f2-b7a1f1a7355b": Phase="Pending", Reason="", readiness=false. Elapsed: 56.805781ms May 15 23:48:50.091: INFO: Pod "downwardapi-volume-a11aac7a-c2de-48d6-86f2-b7a1f1a7355b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165454965s May 15 23:48:52.175: INFO: Pod "downwardapi-volume-a11aac7a-c2de-48d6-86f2-b7a1f1a7355b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.249182569s STEP: Saw pod success May 15 23:48:52.175: INFO: Pod "downwardapi-volume-a11aac7a-c2de-48d6-86f2-b7a1f1a7355b" satisfied condition "Succeeded or Failed" May 15 23:48:52.178: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a11aac7a-c2de-48d6-86f2-b7a1f1a7355b container client-container: STEP: delete the pod May 15 23:48:52.218: INFO: Waiting for pod downwardapi-volume-a11aac7a-c2de-48d6-86f2-b7a1f1a7355b to disappear May 15 23:48:52.230: INFO: Pod downwardapi-volume-a11aac7a-c2de-48d6-86f2-b7a1f1a7355b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:48:52.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-977" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":41,"skipped":682,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:48:52.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 15 23:48:52.888: INFO: created pod pod-service-account-defaultsa May 15 23:48:52.888: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 15 23:48:52.893: INFO: created pod pod-service-account-mountsa May 15 23:48:52.893: INFO: pod pod-service-account-mountsa service account token volume mount: true May 15 23:48:52.915: INFO: created pod pod-service-account-nomountsa May 15 23:48:52.915: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 15 23:48:52.977: INFO: created pod pod-service-account-defaultsa-mountspec May 15 23:48:52.977: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 15 23:48:53.148: INFO: created pod pod-service-account-mountsa-mountspec May 15 23:48:53.148: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 15 23:48:53.163: INFO: created pod pod-service-account-nomountsa-mountspec May 15 23:48:53.163: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 15 23:48:53.329: INFO: created pod pod-service-account-defaultsa-nomountspec May 15 23:48:53.329: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 15 23:48:53.378: INFO: created pod pod-service-account-mountsa-nomountspec May 15 23:48:53.378: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 15 23:48:53.410: INFO: created pod pod-service-account-nomountsa-nomountspec May 15 23:48:53.410: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:48:53.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3381" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":42,"skipped":809,"failed":0} SSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:48:53.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-mjj7l in namespace proxy-7189 I0515 23:48:53.886439 7 runners.go:190] Created replication controller with name: proxy-service-mjj7l, namespace: proxy-7189, replica count: 1 I0515 23:48:54.936830 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:48:55.937048 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:48:56.937331 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:48:57.937608 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:48:58.937845 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:48:59.938074 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:49:00.938349 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:49:01.938599 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:49:02.938897 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:49:03.939116 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:49:04.939277 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 23:49:05.939497 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 23:49:06.939794 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 23:49:07.940020 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 
created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 23:49:08.940218 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 23:49:09.940469 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 23:49:10.940713 7 runners.go:190] proxy-service-mjj7l Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 23:49:10.943: INFO: setup took 17.231676708s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 15 23:49:10.955: INFO: (0) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 11.134155ms) May 15 23:49:10.955: INFO: (0) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 11.061479ms) May 15 23:49:10.956: INFO: (0) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname1/proxy/: foo (200; 11.89692ms) May 15 23:49:10.958: INFO: (0) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname2/proxy/: bar (200; 14.249773ms) May 15 23:49:10.958: INFO: (0) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 14.099829ms) May 15 23:49:10.958: INFO: (0) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:1080/proxy/: ... (200; 14.474229ms) May 15 23:49:10.958: INFO: (0) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg/proxy/: test (200; 14.210461ms) May 15 23:49:10.958: INFO: (0) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:1080/proxy/: test<... (200; 14.635531ms) May 15 23:49:10.958: INFO: (0) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname2/proxy/: bar (200; 14.773925ms) May 15 23:49:10.958: INFO: (0) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname1/proxy/: foo (200; 15.146119ms) May 15 23:49:10.959: INFO: (0) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 15.168118ms) May 15 23:49:10.964: INFO: (0) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:462/proxy/: tls qux (200; 19.798647ms) May 15 23:49:10.964: INFO: (0) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname2/proxy/: tls qux (200; 20.226072ms) May 15 23:49:10.964: INFO: (0) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 20.166283ms) May 15 23:49:10.964: INFO: (0) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname1/proxy/: tls baz (200; 20.094542ms) May 15 23:49:10.967: INFO: (0) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: ... (200; 27.071434ms) May 15 23:49:10.994: INFO: (1) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname2/proxy/: bar (200; 27.157431ms) May 15 23:49:10.994: INFO: (1) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: test<... 
(200; 27.268075ms) May 15 23:49:10.994: INFO: (1) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 27.415048ms) May 15 23:49:10.994: INFO: (1) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname2/proxy/: bar (200; 27.414025ms) May 15 23:49:10.994: INFO: (1) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 27.46459ms) May 15 23:49:10.994: INFO: (1) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname1/proxy/: foo (200; 27.409821ms) May 15 23:49:10.994: INFO: (1) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname1/proxy/: tls baz (200; 27.486298ms) May 15 23:49:10.994: INFO: (1) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname2/proxy/: tls qux (200; 27.481111ms) May 15 23:49:10.994: INFO: (1) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 27.538084ms) May 15 23:49:10.994: INFO: (1) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 27.603293ms) May 15 23:49:10.995: INFO: (1) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg/proxy/: test (200; 28.01656ms) May 15 23:49:11.021: INFO: (2) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:1080/proxy/: ... (200; 25.662506ms) May 15 23:49:11.021: INFO: (2) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: test<... (200; 26.089502ms) May 15 23:49:11.022: INFO: (2) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 27.235273ms) May 15 23:49:11.023: INFO: (2) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:462/proxy/: tls qux (200; 27.978667ms) May 15 23:49:11.023: INFO: (2) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname2/proxy/: bar (200; 28.057412ms) May 15 23:49:11.023: INFO: (2) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 28.386227ms) May 15 23:49:11.023: INFO: (2) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 28.314901ms) May 15 23:49:11.023: INFO: (2) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname1/proxy/: foo (200; 28.350957ms) May 15 23:49:11.023: INFO: (2) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname2/proxy/: tls qux (200; 28.322164ms) May 15 23:49:11.023: INFO: (2) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg/proxy/: test (200; 28.455209ms) May 15 23:49:11.023: INFO: (2) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 28.578608ms) May 15 23:49:11.023: INFO: (2) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname2/proxy/: bar (200; 28.515337ms) May 15 23:49:11.024: INFO: (2) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 28.799663ms) May 15 23:49:11.024: INFO: (2) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname1/proxy/: foo (200; 28.754138ms) May 15 23:49:11.024: INFO: (2) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname1/proxy/: tls baz (200; 28.74431ms) May 15 23:49:11.028: INFO: (3) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 4.086013ms) May 15 23:49:11.029: INFO: (3) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:1080/proxy/: test<... 
(200; 5.050448ms) May 15 23:49:11.029: INFO: (3) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 5.036967ms) May 15 23:49:11.029: INFO: (3) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg/proxy/: test (200; 5.186225ms) May 15 23:49:11.029: INFO: (3) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 5.263126ms) May 15 23:49:11.029: INFO: (3) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:1080/proxy/: ... (200; 5.283446ms) May 15 23:49:11.029: INFO: (3) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 5.233627ms) May 15 23:49:11.029: INFO: (3) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 5.382907ms) May 15 23:49:11.030: INFO: (3) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: ... (200; 4.565702ms) May 15 23:49:11.036: INFO: (4) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 4.84577ms) May 15 23:49:11.036: INFO: (4) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg/proxy/: test (200; 5.016626ms) May 15 23:49:11.036: INFO: (4) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname1/proxy/: foo (200; 4.766764ms) May 15 23:49:11.036: INFO: (4) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 5.026803ms) May 15 23:49:11.036: INFO: (4) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname2/proxy/: bar (200; 5.324859ms) May 15 23:49:11.036: INFO: (4) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 4.885662ms) May 15 23:49:11.036: INFO: (4) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname1/proxy/: foo (200; 5.513808ms) May 15 23:49:11.036: INFO: (4) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:1080/proxy/: test<... (200; 5.351517ms) May 15 23:49:11.036: INFO: (4) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: test (200; 3.744237ms) May 15 23:49:11.040: INFO: (5) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname1/proxy/: tls baz (200; 3.80766ms) May 15 23:49:11.040: INFO: (5) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:1080/proxy/: test<... (200; 3.770656ms) May 15 23:49:11.040: INFO: (5) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 3.933094ms) May 15 23:49:11.040: INFO: (5) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 4.102955ms) May 15 23:49:11.040: INFO: (5) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:462/proxy/: tls qux (200; 4.061631ms) May 15 23:49:11.040: INFO: (5) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 4.176189ms) May 15 23:49:11.041: INFO: (5) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:1080/proxy/: ... (200; 4.67021ms) May 15 23:49:11.041: INFO: (5) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname2/proxy/: bar (200; 5.121288ms) May 15 23:49:11.042: INFO: (5) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 5.149124ms) May 15 23:49:11.044: INFO: (6) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:1080/proxy/: ... 
(200; 2.806157ms) May 15 23:49:11.045: INFO: (6) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: test (200; 4.135277ms) May 15 23:49:11.046: INFO: (6) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 4.077133ms) May 15 23:49:11.046: INFO: (6) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname1/proxy/: tls baz (200; 4.21312ms) May 15 23:49:11.046: INFO: (6) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 4.151767ms) May 15 23:49:11.046: INFO: (6) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname1/proxy/: foo (200; 4.19959ms) May 15 23:49:11.046: INFO: (6) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 4.368241ms) May 15 23:49:11.046: INFO: (6) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 4.323485ms) May 15 23:49:11.046: INFO: (6) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname2/proxy/: bar (200; 4.607168ms) May 15 23:49:11.046: INFO: (6) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname2/proxy/: bar (200; 4.531791ms) May 15 23:49:11.046: INFO: (6) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:1080/proxy/: test<... (200; 4.5992ms) May 15 23:49:11.046: INFO: (6) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname1/proxy/: foo (200; 4.545724ms) May 15 23:49:11.046: INFO: (6) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname2/proxy/: tls qux (200; 4.554397ms) May 15 23:49:11.049: INFO: (7) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 2.221029ms) May 15 23:49:11.049: INFO: (7) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 2.96536ms) May 15 23:49:11.050: INFO: (7) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname2/proxy/: bar (200; 3.386817ms) May 15 23:49:11.050: INFO: (7) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:1080/proxy/: ... (200; 3.720421ms) May 15 23:49:11.050: INFO: (7) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 4.032592ms) May 15 23:49:11.051: INFO: (7) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 4.157132ms) May 15 23:49:11.051: INFO: (7) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 4.213846ms) May 15 23:49:11.051: INFO: (7) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname1/proxy/: foo (200; 4.278801ms) May 15 23:49:11.051: INFO: (7) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname2/proxy/: tls qux (200; 4.283986ms) May 15 23:49:11.051: INFO: (7) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg/proxy/: test (200; 4.437559ms) May 15 23:49:11.051: INFO: (7) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:1080/proxy/: test<... 
(200; 4.508235ms) May 15 23:49:11.051: INFO: (7) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname1/proxy/: tls baz (200; 4.767862ms) May 15 23:49:11.051: INFO: (7) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname2/proxy/: bar (200; 4.710059ms) May 15 23:49:11.051: INFO: (7) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname1/proxy/: foo (200; 4.726149ms) May 15 23:49:11.051: INFO: (7) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: ... (200; 2.187564ms) May 15 23:49:11.054: INFO: (8) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 2.402465ms) May 15 23:49:11.055: INFO: (8) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg/proxy/: test (200; 3.671792ms) May 15 23:49:11.055: INFO: (8) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 3.701002ms) May 15 23:49:11.055: INFO: (8) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 3.647154ms) May 15 23:49:11.055: INFO: (8) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 3.769039ms) May 15 23:49:11.055: INFO: (8) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: test<... (200; 3.981608ms) May 15 23:49:11.055: INFO: (8) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 3.956497ms) May 15 23:49:11.055: INFO: (8) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:462/proxy/: tls qux (200; 4.147289ms) May 15 23:49:11.056: INFO: (8) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname2/proxy/: bar (200; 4.7904ms) May 15 23:49:11.056: INFO: (8) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname2/proxy/: tls qux (200; 5.034431ms) May 15 23:49:11.056: INFO: (8) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname2/proxy/: bar (200; 4.999069ms) May 15 23:49:11.057: INFO: (8) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname1/proxy/: tls baz (200; 5.678879ms) May 15 23:49:11.057: INFO: (8) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname1/proxy/: foo (200; 5.758715ms) May 15 23:49:11.057: INFO: (8) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname1/proxy/: foo (200; 5.729508ms) May 15 23:49:11.060: INFO: (9) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:462/proxy/: tls qux (200; 2.7657ms) May 15 23:49:11.060: INFO: (9) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 2.933058ms) May 15 23:49:11.061: INFO: (9) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 4.351202ms) May 15 23:49:11.061: INFO: (9) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 4.382452ms) May 15 23:49:11.062: INFO: (9) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg/proxy/: test (200; 4.450592ms) May 15 23:49:11.062: INFO: (9) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname2/proxy/: bar (200; 4.564332ms) May 15 23:49:11.062: INFO: (9) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname1/proxy/: tls baz (200; 4.702972ms) May 15 23:49:11.062: INFO: (9) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 4.712888ms) May 15 23:49:11.062: INFO: (9) 
/api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname1/proxy/: foo (200; 4.770455ms) May 15 23:49:11.062: INFO: (9) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:1080/proxy/: test<... (200; 4.718486ms) May 15 23:49:11.062: INFO: (9) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:1080/proxy/: ... (200; 4.921216ms) May 15 23:49:11.062: INFO: (9) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname2/proxy/: bar (200; 4.994493ms) May 15 23:49:11.062: INFO: (9) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 5.016509ms) May 15 23:49:11.062: INFO: (9) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: test (200; 2.7799ms) May 15 23:49:11.066: INFO: (10) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:1080/proxy/: test<... (200; 3.077017ms) May 15 23:49:11.066: INFO: (10) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 3.259681ms) May 15 23:49:11.066: INFO: (10) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 3.358864ms) May 15 23:49:11.067: INFO: (10) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 3.386737ms) May 15 23:49:11.067: INFO: (10) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:1080/proxy/: ... (200; 3.442987ms) May 15 23:49:11.067: INFO: (10) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: test<... (200; 3.112686ms) May 15 23:49:11.071: INFO: (11) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 3.359333ms) May 15 23:49:11.071: INFO: (11) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:462/proxy/: tls qux (200; 3.672071ms) May 15 23:49:11.071: INFO: (11) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 3.789483ms) May 15 23:49:11.071: INFO: (11) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname2/proxy/: bar (200; 3.787065ms) May 15 23:49:11.071: INFO: (11) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:1080/proxy/: ... (200; 4.235729ms) May 15 23:49:11.071: INFO: (11) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname2/proxy/: bar (200; 4.285778ms) May 15 23:49:11.071: INFO: (11) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: test (200; 4.279263ms) May 15 23:49:11.072: INFO: (11) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname1/proxy/: foo (200; 4.282317ms) May 15 23:49:11.072: INFO: (11) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname1/proxy/: foo (200; 4.322117ms) May 15 23:49:11.072: INFO: (11) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname2/proxy/: tls qux (200; 4.410872ms) May 15 23:49:11.076: INFO: (12) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname2/proxy/: bar (200; 3.845088ms) May 15 23:49:11.076: INFO: (12) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname1/proxy/: foo (200; 3.814162ms) May 15 23:49:11.077: INFO: (12) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 4.914071ms) May 15 23:49:11.077: INFO: (12) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:1080/proxy/: ... 
(200; 4.941031ms) May 15 23:49:11.077: INFO: (12) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname1/proxy/: tls baz (200; 4.921431ms) May 15 23:49:11.077: INFO: (12) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 5.017476ms) May 15 23:49:11.077: INFO: (12) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname1/proxy/: foo (200; 4.97024ms) May 15 23:49:11.077: INFO: (12) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: test (200; 5.011186ms) May 15 23:49:11.077: INFO: (12) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 5.023926ms) May 15 23:49:11.077: INFO: (12) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 5.052324ms) May 15 23:49:11.077: INFO: (12) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname2/proxy/: tls qux (200; 4.996332ms) May 15 23:49:11.077: INFO: (12) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:1080/proxy/: test<... (200; 5.08646ms) May 15 23:49:11.077: INFO: (12) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 5.174079ms) May 15 23:49:11.077: INFO: (12) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:462/proxy/: tls qux (200; 5.224351ms) May 15 23:49:11.104: INFO: (13) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 26.972244ms) May 15 23:49:11.104: INFO: (13) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 27.041869ms) May 15 23:49:11.104: INFO: (13) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:462/proxy/: tls qux (200; 27.038981ms) May 15 23:49:11.104: INFO: (13) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 27.160922ms) May 15 23:49:11.104: INFO: (13) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:1080/proxy/: ... (200; 27.184954ms) May 15 23:49:11.104: INFO: (13) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: test<... 
(200; 27.329874ms) May 15 23:49:11.105: INFO: (13) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 28.269257ms) May 15 23:49:11.106: INFO: (13) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 28.633507ms) May 15 23:49:11.106: INFO: (13) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname2/proxy/: tls qux (200; 28.85276ms) May 15 23:49:11.106: INFO: (13) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname1/proxy/: foo (200; 29.172745ms) May 15 23:49:11.106: INFO: (13) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname1/proxy/: foo (200; 29.38335ms) May 15 23:49:11.107: INFO: (13) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname2/proxy/: bar (200; 30.078857ms) May 15 23:49:11.107: INFO: (13) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname2/proxy/: bar (200; 30.292443ms) May 15 23:49:11.108: INFO: (13) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname1/proxy/: tls baz (200; 30.520058ms) May 15 23:49:11.108: INFO: (13) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg/proxy/: test (200; 30.66315ms) May 15 23:49:11.115: INFO: (14) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname1/proxy/: tls baz (200; 6.624955ms) May 15 23:49:11.115: INFO: (14) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname2/proxy/: bar (200; 6.75047ms) May 15 23:49:11.115: INFO: (14) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname1/proxy/: foo (200; 6.78033ms) May 15 23:49:11.115: INFO: (14) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 6.765221ms) May 15 23:49:11.115: INFO: (14) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname2/proxy/: tls qux (200; 6.882434ms) May 15 23:49:11.115: INFO: (14) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname2/proxy/: bar (200; 6.903939ms) May 15 23:49:11.115: INFO: (14) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname1/proxy/: foo (200; 7.397161ms) May 15 23:49:11.115: INFO: (14) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: test (200; 7.47008ms) May 15 23:49:11.115: INFO: (14) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 7.709216ms) May 15 23:49:11.116: INFO: (14) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 7.686444ms) May 15 23:49:11.116: INFO: (14) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:1080/proxy/: ... (200; 7.881119ms) May 15 23:49:11.116: INFO: (14) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 7.913662ms) May 15 23:49:11.116: INFO: (14) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 7.854208ms) May 15 23:49:11.116: INFO: (14) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:1080/proxy/: test<... (200; 8.055527ms) May 15 23:49:11.118: INFO: (15) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 1.803134ms) May 15 23:49:11.119: INFO: (15) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:1080/proxy/: ... 
(200; 2.497442ms) May 15 23:49:11.119: INFO: (15) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 2.917254ms) May 15 23:49:11.119: INFO: (15) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 2.672478ms) May 15 23:49:11.119: INFO: (15) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname1/proxy/: foo (200; 3.151707ms) May 15 23:49:11.120: INFO: (15) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:1080/proxy/: test<... (200; 2.950109ms) May 15 23:49:11.120: INFO: (15) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname2/proxy/: bar (200; 3.658747ms) May 15 23:49:11.120: INFO: (15) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 2.813737ms) May 15 23:49:11.120: INFO: (15) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: test (200; 2.948612ms) May 15 23:49:11.121: INFO: (15) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname2/proxy/: tls qux (200; 4.795686ms) May 15 23:49:11.121: INFO: (15) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname2/proxy/: bar (200; 4.972654ms) May 15 23:49:11.122: INFO: (15) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:462/proxy/: tls qux (200; 4.498494ms) May 15 23:49:11.122: INFO: (15) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 4.686274ms) May 15 23:49:11.122: INFO: (15) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname1/proxy/: foo (200; 5.627893ms) May 15 23:49:11.122: INFO: (15) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname1/proxy/: tls baz (200; 5.637687ms) May 15 23:49:11.125: INFO: (16) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg/proxy/: test (200; 2.431586ms) May 15 23:49:11.125: INFO: (16) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:1080/proxy/: test<... (200; 2.428692ms) May 15 23:49:11.126: INFO: (16) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:1080/proxy/: ... (200; 4.102091ms) May 15 23:49:11.126: INFO: (16) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 4.264585ms) May 15 23:49:11.126: INFO: (16) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 4.297527ms) May 15 23:49:11.127: INFO: (16) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: test<... 
(200; 2.168613ms) May 15 23:49:11.132: INFO: (17) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 4.416942ms) May 15 23:49:11.132: INFO: (17) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname2/proxy/: bar (200; 4.396147ms) May 15 23:49:11.132: INFO: (17) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname1/proxy/: foo (200; 4.452ms) May 15 23:49:11.132: INFO: (17) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname1/proxy/: foo (200; 4.391024ms) May 15 23:49:11.132: INFO: (17) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname2/proxy/: tls qux (200; 4.441421ms) May 15 23:49:11.132: INFO: (17) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname2/proxy/: bar (200; 4.545219ms) May 15 23:49:11.132: INFO: (17) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:462/proxy/: tls qux (200; 4.480838ms) May 15 23:49:11.133: INFO: (17) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 4.562236ms) May 15 23:49:11.133: INFO: (17) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname1/proxy/: tls baz (200; 4.633221ms) May 15 23:49:11.133: INFO: (17) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 4.805097ms) May 15 23:49:11.133: INFO: (17) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: ... (200; 4.837702ms) May 15 23:49:11.133: INFO: (17) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg/proxy/: test (200; 4.870498ms) May 15 23:49:11.133: INFO: (17) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 5.149015ms) May 15 23:49:11.136: INFO: (18) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 2.416404ms) May 15 23:49:11.136: INFO: (18) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 3.067177ms) May 15 23:49:11.136: INFO: (18) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 3.267334ms) May 15 23:49:11.137: INFO: (18) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:462/proxy/: tls qux (200; 3.292111ms) May 15 23:49:11.137: INFO: (18) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg/proxy/: test (200; 3.575812ms) May 15 23:49:11.137: INFO: (18) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:1080/proxy/: ... (200; 3.569156ms) May 15 23:49:11.137: INFO: (18) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:1080/proxy/: test<... (200; 3.598208ms) May 15 23:49:11.137: INFO: (18) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 3.55493ms) May 15 23:49:11.137: INFO: (18) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname2/proxy/: tls qux (200; 3.678673ms) May 15 23:49:11.137: INFO: (18) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: ... 
(200; 2.396058ms) May 15 23:49:11.141: INFO: (19) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 2.568417ms) May 15 23:49:11.141: INFO: (19) /api/v1/namespaces/proxy-7189/pods/http:proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 2.613927ms) May 15 23:49:11.141: INFO: (19) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/: foo (200; 2.581482ms) May 15 23:49:11.141: INFO: (19) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg/proxy/: test (200; 2.756144ms) May 15 23:49:11.141: INFO: (19) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:443/proxy/: test<... (200; 2.926141ms) May 15 23:49:11.141: INFO: (19) /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:162/proxy/: bar (200; 2.88236ms) May 15 23:49:11.141: INFO: (19) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:462/proxy/: tls qux (200; 3.171852ms) May 15 23:49:11.141: INFO: (19) /api/v1/namespaces/proxy-7189/pods/https:proxy-service-mjj7l-z8mhg:460/proxy/: tls baz (200; 3.149403ms) May 15 23:49:11.142: INFO: (19) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname2/proxy/: tls qux (200; 4.071774ms) May 15 23:49:11.142: INFO: (19) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname2/proxy/: bar (200; 3.988718ms) May 15 23:49:11.142: INFO: (19) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname1/proxy/: foo (200; 4.089886ms) May 15 23:49:11.142: INFO: (19) /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname2/proxy/: bar (200; 4.047922ms) May 15 23:49:11.142: INFO: (19) /api/v1/namespaces/proxy-7189/services/https:proxy-service-mjj7l:tlsportname1/proxy/: tls baz (200; 4.066333ms) May 15 23:49:11.142: INFO: (19) /api/v1/namespaces/proxy-7189/services/http:proxy-service-mjj7l:portname1/proxy/: foo (200; 4.114693ms) STEP: deleting ReplicationController proxy-service-mjj7l in namespace proxy-7189, will wait for the garbage collector to delete the pods May 15 23:49:11.201: INFO: Deleting ReplicationController proxy-service-mjj7l took: 7.199406ms May 15 23:49:11.502: INFO: Terminating ReplicationController proxy-service-mjj7l pods took: 300.202176ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:49:24.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7189" for this suite. 
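For readers reconstructing what this spec exercised: the URLs above are the apiserver's proxy subresource, /api/v1/namespaces/&lt;ns&gt;/{pods,services}/&lt;name&gt;:&lt;port&gt;/proxy/. A minimal client-go sketch of the same access pattern follows; the namespace, pod, service, and port names are copied from the log, the kubeconfig path matches the one used throughout this run, and this is an illustration of the URL scheme rather than the framework's own helper.

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// GET /api/v1/namespaces/proxy-7189/pods/proxy-service-mjj7l-z8mhg:160/proxy/
	body, err := clientset.CoreV1().RESTClient().Get().
		Namespace("proxy-7189").
		Resource("pods").
		Name("proxy-service-mjj7l-z8mhg:160"). // "name:port" selects the pod port to proxy to
		SubResource("proxy").
		Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod proxy response: %q\n", body)

	// GET /api/v1/namespaces/proxy-7189/services/proxy-service-mjj7l:portname1/proxy/
	body, err = clientset.CoreV1().RESTClient().Get().
		Namespace("proxy-7189").
		Resource("services").
		Name("proxy-service-mjj7l:portname1"). // named service ports work here too
		SubResource("proxy").
		Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("service proxy response: %q\n", body)
}
```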
• [SLOW TEST:31.364 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":288,"completed":43,"skipped":812,"failed":0} [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:49:24.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 15 23:49:25.789: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 15 23:49:27.800: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725183365, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725183365, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725183365, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725183365, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 23:49:29.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725183365, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725183365, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725183365, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725183365, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 23:49:32.854: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 23:49:32.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:49:34.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-940" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.376 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":44,"skipped":812,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:49:34.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3007 May 15 23:49:38.457: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3007 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 15 23:49:38.720: INFO: stderr: "I0515 23:49:38.595757 1198 log.go:172] (0xc000966790) (0xc00067bd60) Create stream\nI0515 23:49:38.595826 1198 log.go:172] (0xc000966790) (0xc00067bd60) Stream added, broadcasting: 1\nI0515 23:49:38.598394 1198 log.go:172] (0xc000966790) Reply frame received for 1\nI0515 23:49:38.598436 1198 log.go:172] (0xc000966790) (0xc000302f00) 
Create stream\nI0515 23:49:38.598447 1198 log.go:172] (0xc000966790) (0xc000302f00) Stream added, broadcasting: 3\nI0515 23:49:38.599365 1198 log.go:172] (0xc000966790) Reply frame received for 3\nI0515 23:49:38.599397 1198 log.go:172] (0xc000966790) (0xc000303180) Create stream\nI0515 23:49:38.599408 1198 log.go:172] (0xc000966790) (0xc000303180) Stream added, broadcasting: 5\nI0515 23:49:38.600260 1198 log.go:172] (0xc000966790) Reply frame received for 5\nI0515 23:49:38.689806 1198 log.go:172] (0xc000966790) Data frame received for 5\nI0515 23:49:38.689841 1198 log.go:172] (0xc000303180) (5) Data frame handling\nI0515 23:49:38.689862 1198 log.go:172] (0xc000303180) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0515 23:49:38.709967 1198 log.go:172] (0xc000966790) Data frame received for 3\nI0515 23:49:38.709996 1198 log.go:172] (0xc000302f00) (3) Data frame handling\nI0515 23:49:38.710017 1198 log.go:172] (0xc000302f00) (3) Data frame sent\nI0515 23:49:38.710844 1198 log.go:172] (0xc000966790) Data frame received for 3\nI0515 23:49:38.710864 1198 log.go:172] (0xc000302f00) (3) Data frame handling\nI0515 23:49:38.711269 1198 log.go:172] (0xc000966790) Data frame received for 5\nI0515 23:49:38.711302 1198 log.go:172] (0xc000303180) (5) Data frame handling\nI0515 23:49:38.713040 1198 log.go:172] (0xc000966790) Data frame received for 1\nI0515 23:49:38.713057 1198 log.go:172] (0xc00067bd60) (1) Data frame handling\nI0515 23:49:38.713070 1198 log.go:172] (0xc00067bd60) (1) Data frame sent\nI0515 23:49:38.713084 1198 log.go:172] (0xc000966790) (0xc00067bd60) Stream removed, broadcasting: 1\nI0515 23:49:38.713580 1198 log.go:172] (0xc000966790) Go away received\nI0515 23:49:38.713842 1198 log.go:172] (0xc000966790) (0xc00067bd60) Stream removed, broadcasting: 1\nI0515 23:49:38.713877 1198 log.go:172] (0xc000966790) (0xc000302f00) Stream removed, broadcasting: 3\nI0515 23:49:38.713899 1198 log.go:172] (0xc000966790) (0xc000303180) Stream removed, broadcasting: 5\n" May 15 23:49:38.720: INFO: stdout: "iptables" May 15 23:49:38.720: INFO: proxyMode: iptables May 15 23:49:38.727: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 15 23:49:38.744: INFO: Pod kube-proxy-mode-detector still exists May 15 23:49:40.744: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 15 23:49:40.749: INFO: Pod kube-proxy-mode-detector still exists May 15 23:49:42.745: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 15 23:49:42.749: INFO: Pod kube-proxy-mode-detector still exists May 15 23:49:44.744: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 15 23:49:44.750: INFO: Pod kube-proxy-mode-detector still exists May 15 23:49:46.744: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 15 23:49:46.748: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-3007 STEP: creating replication controller affinity-clusterip-timeout in namespace services-3007 I0515 23:49:46.802859 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-3007, replica count: 3 I0515 23:49:49.853242 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:49:52.853439 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 
0 runningButNotReady I0515 23:49:55.853682 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 23:49:55.859: INFO: Creating new exec pod May 15 23:50:00.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3007 execpod-affinity6th6x -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 15 23:50:01.187: INFO: stderr: "I0515 23:50:01.096104 1218 log.go:172] (0xc000bd6e70) (0xc000316aa0) Create stream\nI0515 23:50:01.096167 1218 log.go:172] (0xc000bd6e70) (0xc000316aa0) Stream added, broadcasting: 1\nI0515 23:50:01.098741 1218 log.go:172] (0xc000bd6e70) Reply frame received for 1\nI0515 23:50:01.098785 1218 log.go:172] (0xc000bd6e70) (0xc0003d3ea0) Create stream\nI0515 23:50:01.098796 1218 log.go:172] (0xc000bd6e70) (0xc0003d3ea0) Stream added, broadcasting: 3\nI0515 23:50:01.099719 1218 log.go:172] (0xc000bd6e70) Reply frame received for 3\nI0515 23:50:01.099760 1218 log.go:172] (0xc000bd6e70) (0xc000560000) Create stream\nI0515 23:50:01.099770 1218 log.go:172] (0xc000bd6e70) (0xc000560000) Stream added, broadcasting: 5\nI0515 23:50:01.100766 1218 log.go:172] (0xc000bd6e70) Reply frame received for 5\nI0515 23:50:01.181485 1218 log.go:172] (0xc000bd6e70) Data frame received for 5\nI0515 23:50:01.181522 1218 log.go:172] (0xc000560000) (5) Data frame handling\nI0515 23:50:01.181536 1218 log.go:172] (0xc000560000) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0515 23:50:01.181610 1218 log.go:172] (0xc000bd6e70) Data frame received for 3\nI0515 23:50:01.181618 1218 log.go:172] (0xc0003d3ea0) (3) Data frame handling\nI0515 23:50:01.181643 1218 log.go:172] (0xc000bd6e70) Data frame received for 5\nI0515 23:50:01.181655 1218 log.go:172] (0xc000560000) (5) Data frame handling\nI0515 23:50:01.182828 1218 log.go:172] (0xc000bd6e70) Data frame received for 1\nI0515 23:50:01.182842 1218 log.go:172] (0xc000316aa0) (1) Data frame handling\nI0515 23:50:01.182850 1218 log.go:172] (0xc000316aa0) (1) Data frame sent\nI0515 23:50:01.182863 1218 log.go:172] (0xc000bd6e70) (0xc000316aa0) Stream removed, broadcasting: 1\nI0515 23:50:01.182888 1218 log.go:172] (0xc000bd6e70) Go away received\nI0515 23:50:01.183156 1218 log.go:172] (0xc000bd6e70) (0xc000316aa0) Stream removed, broadcasting: 1\nI0515 23:50:01.183174 1218 log.go:172] (0xc000bd6e70) (0xc0003d3ea0) Stream removed, broadcasting: 3\nI0515 23:50:01.183187 1218 log.go:172] (0xc000bd6e70) (0xc000560000) Stream removed, broadcasting: 5\n" May 15 23:50:01.187: INFO: stdout: "" May 15 23:50:01.188: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3007 execpod-affinity6th6x -- /bin/sh -x -c nc -zv -t -w 2 10.105.53.227 80' May 15 23:50:01.400: INFO: stderr: "I0515 23:50:01.319881 1239 log.go:172] (0xc000913290) (0xc0008fe960) Create stream\nI0515 23:50:01.319923 1239 log.go:172] (0xc000913290) (0xc0008fe960) Stream added, broadcasting: 1\nI0515 23:50:01.322469 1239 log.go:172] (0xc000913290) Reply frame received for 1\nI0515 23:50:01.322551 1239 log.go:172] (0xc000913290) (0xc00098e5a0) Create stream\nI0515 23:50:01.322599 1239 log.go:172] (0xc000913290) (0xc00098e5a0) Stream added, broadcasting: 3\nI0515 23:50:01.324227 1239 log.go:172] (0xc000913290) Reply frame received 
for 3\nI0515 23:50:01.324269 1239 log.go:172] (0xc000913290) (0xc0008fe000) Create stream\nI0515 23:50:01.324282 1239 log.go:172] (0xc000913290) (0xc0008fe000) Stream added, broadcasting: 5\nI0515 23:50:01.325080 1239 log.go:172] (0xc000913290) Reply frame received for 5\nI0515 23:50:01.393867 1239 log.go:172] (0xc000913290) Data frame received for 5\nI0515 23:50:01.393902 1239 log.go:172] (0xc0008fe000) (5) Data frame handling\nI0515 23:50:01.393919 1239 log.go:172] (0xc0008fe000) (5) Data frame sent\nI0515 23:50:01.393936 1239 log.go:172] (0xc000913290) Data frame received for 5\nI0515 23:50:01.393943 1239 log.go:172] (0xc0008fe000) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.53.227 80\nConnection to 10.105.53.227 80 port [tcp/http] succeeded!\nI0515 23:50:01.393952 1239 log.go:172] (0xc000913290) Data frame received for 3\nI0515 23:50:01.393985 1239 log.go:172] (0xc00098e5a0) (3) Data frame handling\nI0515 23:50:01.395283 1239 log.go:172] (0xc000913290) Data frame received for 1\nI0515 23:50:01.395375 1239 log.go:172] (0xc0008fe960) (1) Data frame handling\nI0515 23:50:01.395420 1239 log.go:172] (0xc0008fe960) (1) Data frame sent\nI0515 23:50:01.395452 1239 log.go:172] (0xc000913290) (0xc0008fe960) Stream removed, broadcasting: 1\nI0515 23:50:01.395481 1239 log.go:172] (0xc000913290) Go away received\nI0515 23:50:01.395844 1239 log.go:172] (0xc000913290) (0xc0008fe960) Stream removed, broadcasting: 1\nI0515 23:50:01.395866 1239 log.go:172] (0xc000913290) (0xc00098e5a0) Stream removed, broadcasting: 3\nI0515 23:50:01.395878 1239 log.go:172] (0xc000913290) (0xc0008fe000) Stream removed, broadcasting: 5\n" May 15 23:50:01.400: INFO: stdout: "" May 15 23:50:01.400: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3007 execpod-affinity6th6x -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.53.227:80/ ; done' May 15 23:50:01.679: INFO: stderr: "I0515 23:50:01.531403 1258 log.go:172] (0xc000982e70) (0xc000b0c280) Create stream\nI0515 23:50:01.531469 1258 log.go:172] (0xc000982e70) (0xc000b0c280) Stream added, broadcasting: 1\nI0515 23:50:01.535894 1258 log.go:172] (0xc000982e70) Reply frame received for 1\nI0515 23:50:01.535963 1258 log.go:172] (0xc000982e70) (0xc00066bea0) Create stream\nI0515 23:50:01.535980 1258 log.go:172] (0xc000982e70) (0xc00066bea0) Stream added, broadcasting: 3\nI0515 23:50:01.536753 1258 log.go:172] (0xc000982e70) Reply frame received for 3\nI0515 23:50:01.536794 1258 log.go:172] (0xc000982e70) (0xc00060ec80) Create stream\nI0515 23:50:01.536807 1258 log.go:172] (0xc000982e70) (0xc00060ec80) Stream added, broadcasting: 5\nI0515 23:50:01.537812 1258 log.go:172] (0xc000982e70) Reply frame received for 5\nI0515 23:50:01.594103 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.594143 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.594157 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.594170 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.594177 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.594186 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.598738 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.598757 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.598776 1258 log.go:172] 
(0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.599163 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.599191 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.599213 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.599231 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.599240 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.599247 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.603436 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.603450 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.603463 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.604134 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.604154 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.604164 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.604214 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.604227 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.604236 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.610401 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.610431 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.610448 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.611063 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.611095 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.611107 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\nI0515 23:50:01.611118 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.611127 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.611153 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\nI0515 23:50:01.611173 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.611185 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.611199 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.615774 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.615811 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.615847 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.616151 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.616178 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.616199 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.616312 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.616333 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.616357 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.620167 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.620219 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.620243 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.620583 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.620603 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.620610 1258 
log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.620636 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.620664 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.620687 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.625394 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.625405 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.625412 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.626137 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.626147 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.626153 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.626160 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.626166 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.626172 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.630043 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.630055 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.630061 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.630639 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.630656 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.630665 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\nI0515 23:50:01.630672 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.630680 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.630689 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.630695 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.630703 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.630751 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\nI0515 23:50:01.634484 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.634511 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.634547 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.634957 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.634981 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.634990 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.634999 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.635007 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.635013 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.639278 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.639293 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.639304 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.639806 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.639827 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.639837 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.639864 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.639899 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.639927 
1258 log.go:172] (0xc00060ec80) (5) Data frame sent\n+ echo\n+ curl -q -sI0515 23:50:01.639946 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.639998 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.640040 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\n --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.644206 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.644245 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.644277 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.644702 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.644737 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.644761 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\nI0515 23:50:01.644786 1258 log.go:172] (0xc000982e70) Data frame received for 5\n+ echo\n+ curlI0515 23:50:01.644814 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.644870 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\n -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.644907 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.644926 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.644951 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.649017 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.649041 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.649067 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.649949 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.649980 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.649999 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\nI0515 23:50:01.650016 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.650026 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.650036 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.650050 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.650071 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.650098 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\nI0515 23:50:01.654008 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.654037 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.654082 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.654510 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.654549 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.654572 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.654591 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.654610 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.654630 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.658353 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.658373 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.658394 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.658860 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.658884 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 
23:50:01.658894 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.658908 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.658926 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.658942 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.662991 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.663007 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.663041 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.663576 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.663604 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.663618 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.663636 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.663648 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.663660 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.667935 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.667966 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.667988 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.668706 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.668727 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.668750 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.668775 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.668798 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.668825 1258 log.go:172] (0xc00060ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.672807 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.672821 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.672830 1258 log.go:172] (0xc00066bea0) (3) Data frame sent\nI0515 23:50:01.673533 1258 log.go:172] (0xc000982e70) Data frame received for 5\nI0515 23:50:01.673551 1258 log.go:172] (0xc00060ec80) (5) Data frame handling\nI0515 23:50:01.673686 1258 log.go:172] (0xc000982e70) Data frame received for 3\nI0515 23:50:01.673711 1258 log.go:172] (0xc00066bea0) (3) Data frame handling\nI0515 23:50:01.675196 1258 log.go:172] (0xc000982e70) Data frame received for 1\nI0515 23:50:01.675214 1258 log.go:172] (0xc000b0c280) (1) Data frame handling\nI0515 23:50:01.675235 1258 log.go:172] (0xc000b0c280) (1) Data frame sent\nI0515 23:50:01.675379 1258 log.go:172] (0xc000982e70) (0xc000b0c280) Stream removed, broadcasting: 1\nI0515 23:50:01.675750 1258 log.go:172] (0xc000982e70) (0xc000b0c280) Stream removed, broadcasting: 1\nI0515 23:50:01.675781 1258 log.go:172] (0xc000982e70) (0xc00066bea0) Stream removed, broadcasting: 3\nI0515 23:50:01.675800 1258 log.go:172] (0xc000982e70) (0xc00060ec80) Stream removed, broadcasting: 5\n" May 15 23:50:01.680: INFO: stdout: 
"\naffinity-clusterip-timeout-n92kj\naffinity-clusterip-timeout-n92kj\naffinity-clusterip-timeout-n92kj\naffinity-clusterip-timeout-n92kj\naffinity-clusterip-timeout-n92kj\naffinity-clusterip-timeout-n92kj\naffinity-clusterip-timeout-n92kj\naffinity-clusterip-timeout-n92kj\naffinity-clusterip-timeout-n92kj\naffinity-clusterip-timeout-n92kj\naffinity-clusterip-timeout-n92kj\naffinity-clusterip-timeout-n92kj\naffinity-clusterip-timeout-n92kj\naffinity-clusterip-timeout-n92kj\naffinity-clusterip-timeout-n92kj\naffinity-clusterip-timeout-n92kj" May 15 23:50:01.680: INFO: Received response from host: May 15 23:50:01.680: INFO: Received response from host: affinity-clusterip-timeout-n92kj May 15 23:50:01.680: INFO: Received response from host: affinity-clusterip-timeout-n92kj May 15 23:50:01.680: INFO: Received response from host: affinity-clusterip-timeout-n92kj May 15 23:50:01.680: INFO: Received response from host: affinity-clusterip-timeout-n92kj May 15 23:50:01.680: INFO: Received response from host: affinity-clusterip-timeout-n92kj May 15 23:50:01.680: INFO: Received response from host: affinity-clusterip-timeout-n92kj May 15 23:50:01.680: INFO: Received response from host: affinity-clusterip-timeout-n92kj May 15 23:50:01.680: INFO: Received response from host: affinity-clusterip-timeout-n92kj May 15 23:50:01.680: INFO: Received response from host: affinity-clusterip-timeout-n92kj May 15 23:50:01.680: INFO: Received response from host: affinity-clusterip-timeout-n92kj May 15 23:50:01.680: INFO: Received response from host: affinity-clusterip-timeout-n92kj May 15 23:50:01.680: INFO: Received response from host: affinity-clusterip-timeout-n92kj May 15 23:50:01.680: INFO: Received response from host: affinity-clusterip-timeout-n92kj May 15 23:50:01.680: INFO: Received response from host: affinity-clusterip-timeout-n92kj May 15 23:50:01.680: INFO: Received response from host: affinity-clusterip-timeout-n92kj May 15 23:50:01.680: INFO: Received response from host: affinity-clusterip-timeout-n92kj May 15 23:50:01.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3007 execpod-affinity6th6x -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.105.53.227:80/' May 15 23:50:01.892: INFO: stderr: "I0515 23:50:01.809041 1278 log.go:172] (0xc000b89290) (0xc000a7a780) Create stream\nI0515 23:50:01.809226 1278 log.go:172] (0xc000b89290) (0xc000a7a780) Stream added, broadcasting: 1\nI0515 23:50:01.813412 1278 log.go:172] (0xc000b89290) Reply frame received for 1\nI0515 23:50:01.813456 1278 log.go:172] (0xc000b89290) (0xc0006b21e0) Create stream\nI0515 23:50:01.813470 1278 log.go:172] (0xc000b89290) (0xc0006b21e0) Stream added, broadcasting: 3\nI0515 23:50:01.814415 1278 log.go:172] (0xc000b89290) Reply frame received for 3\nI0515 23:50:01.814438 1278 log.go:172] (0xc000b89290) (0xc00061e1e0) Create stream\nI0515 23:50:01.814454 1278 log.go:172] (0xc000b89290) (0xc00061e1e0) Stream added, broadcasting: 5\nI0515 23:50:01.815058 1278 log.go:172] (0xc000b89290) Reply frame received for 5\nI0515 23:50:01.879880 1278 log.go:172] (0xc000b89290) Data frame received for 5\nI0515 23:50:01.879905 1278 log.go:172] (0xc00061e1e0) (5) Data frame handling\nI0515 23:50:01.879923 1278 log.go:172] (0xc00061e1e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:01.885951 1278 log.go:172] (0xc000b89290) Data frame received for 3\nI0515 23:50:01.885973 1278 log.go:172] (0xc0006b21e0) (3) Data 
frame handling\nI0515 23:50:01.886002 1278 log.go:172] (0xc0006b21e0) (3) Data frame sent\nI0515 23:50:01.886305 1278 log.go:172] (0xc000b89290) Data frame received for 3\nI0515 23:50:01.886331 1278 log.go:172] (0xc0006b21e0) (3) Data frame handling\nI0515 23:50:01.886485 1278 log.go:172] (0xc000b89290) Data frame received for 5\nI0515 23:50:01.886523 1278 log.go:172] (0xc00061e1e0) (5) Data frame handling\nI0515 23:50:01.887883 1278 log.go:172] (0xc000b89290) Data frame received for 1\nI0515 23:50:01.887907 1278 log.go:172] (0xc000a7a780) (1) Data frame handling\nI0515 23:50:01.887932 1278 log.go:172] (0xc000a7a780) (1) Data frame sent\nI0515 23:50:01.887958 1278 log.go:172] (0xc000b89290) (0xc000a7a780) Stream removed, broadcasting: 1\nI0515 23:50:01.887985 1278 log.go:172] (0xc000b89290) Go away received\nI0515 23:50:01.888362 1278 log.go:172] (0xc000b89290) (0xc000a7a780) Stream removed, broadcasting: 1\nI0515 23:50:01.888378 1278 log.go:172] (0xc000b89290) (0xc0006b21e0) Stream removed, broadcasting: 3\nI0515 23:50:01.888387 1278 log.go:172] (0xc000b89290) (0xc00061e1e0) Stream removed, broadcasting: 5\n" May 15 23:50:01.893: INFO: stdout: "affinity-clusterip-timeout-n92kj" May 15 23:50:16.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3007 execpod-affinity6th6x -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.105.53.227:80/' May 15 23:50:17.141: INFO: stderr: "I0515 23:50:17.046614 1298 log.go:172] (0xc000be1d90) (0xc0006d9cc0) Create stream\nI0515 23:50:17.046674 1298 log.go:172] (0xc000be1d90) (0xc0006d9cc0) Stream added, broadcasting: 1\nI0515 23:50:17.050179 1298 log.go:172] (0xc000be1d90) Reply frame received for 1\nI0515 23:50:17.050220 1298 log.go:172] (0xc000be1d90) (0xc000671cc0) Create stream\nI0515 23:50:17.050231 1298 log.go:172] (0xc000be1d90) (0xc000671cc0) Stream added, broadcasting: 3\nI0515 23:50:17.051210 1298 log.go:172] (0xc000be1d90) Reply frame received for 3\nI0515 23:50:17.051262 1298 log.go:172] (0xc000be1d90) (0xc0006e8aa0) Create stream\nI0515 23:50:17.051279 1298 log.go:172] (0xc000be1d90) (0xc0006e8aa0) Stream added, broadcasting: 5\nI0515 23:50:17.052221 1298 log.go:172] (0xc000be1d90) Reply frame received for 5\nI0515 23:50:17.130705 1298 log.go:172] (0xc000be1d90) Data frame received for 5\nI0515 23:50:17.130736 1298 log.go:172] (0xc0006e8aa0) (5) Data frame handling\nI0515 23:50:17.130758 1298 log.go:172] (0xc0006e8aa0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:17.134274 1298 log.go:172] (0xc000be1d90) Data frame received for 3\nI0515 23:50:17.134293 1298 log.go:172] (0xc000671cc0) (3) Data frame handling\nI0515 23:50:17.134319 1298 log.go:172] (0xc000671cc0) (3) Data frame sent\nI0515 23:50:17.134886 1298 log.go:172] (0xc000be1d90) Data frame received for 5\nI0515 23:50:17.134903 1298 log.go:172] (0xc0006e8aa0) (5) Data frame handling\nI0515 23:50:17.134921 1298 log.go:172] (0xc000be1d90) Data frame received for 3\nI0515 23:50:17.134949 1298 log.go:172] (0xc000671cc0) (3) Data frame handling\nI0515 23:50:17.136387 1298 log.go:172] (0xc000be1d90) Data frame received for 1\nI0515 23:50:17.136402 1298 log.go:172] (0xc0006d9cc0) (1) Data frame handling\nI0515 23:50:17.136412 1298 log.go:172] (0xc0006d9cc0) (1) Data frame sent\nI0515 23:50:17.136423 1298 log.go:172] (0xc000be1d90) (0xc0006d9cc0) Stream removed, broadcasting: 1\nI0515 23:50:17.136444 1298 log.go:172] (0xc000be1d90) Go away received\nI0515 
23:50:17.136727 1298 log.go:172] (0xc000be1d90) (0xc0006d9cc0) Stream removed, broadcasting: 1\nI0515 23:50:17.136743 1298 log.go:172] (0xc000be1d90) (0xc000671cc0) Stream removed, broadcasting: 3\nI0515 23:50:17.136751 1298 log.go:172] (0xc000be1d90) (0xc0006e8aa0) Stream removed, broadcasting: 5\n" May 15 23:50:17.141: INFO: stdout: "affinity-clusterip-timeout-n92kj" May 15 23:50:32.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3007 execpod-affinity6th6x -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.105.53.227:80/' May 15 23:50:32.392: INFO: stderr: "I0515 23:50:32.273802 1318 log.go:172] (0xc00090c790) (0xc0003586e0) Create stream\nI0515 23:50:32.273860 1318 log.go:172] (0xc00090c790) (0xc0003586e0) Stream added, broadcasting: 1\nI0515 23:50:32.276329 1318 log.go:172] (0xc00090c790) Reply frame received for 1\nI0515 23:50:32.276375 1318 log.go:172] (0xc00090c790) (0xc0006a6e60) Create stream\nI0515 23:50:32.276395 1318 log.go:172] (0xc00090c790) (0xc0006a6e60) Stream added, broadcasting: 3\nI0515 23:50:32.277519 1318 log.go:172] (0xc00090c790) Reply frame received for 3\nI0515 23:50:32.277561 1318 log.go:172] (0xc00090c790) (0xc00064a6e0) Create stream\nI0515 23:50:32.277573 1318 log.go:172] (0xc00090c790) (0xc00064a6e0) Stream added, broadcasting: 5\nI0515 23:50:32.278654 1318 log.go:172] (0xc00090c790) Reply frame received for 5\nI0515 23:50:32.378456 1318 log.go:172] (0xc00090c790) Data frame received for 5\nI0515 23:50:32.378482 1318 log.go:172] (0xc00064a6e0) (5) Data frame handling\nI0515 23:50:32.378499 1318 log.go:172] (0xc00064a6e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:32.383260 1318 log.go:172] (0xc00090c790) Data frame received for 3\nI0515 23:50:32.383284 1318 log.go:172] (0xc0006a6e60) (3) Data frame handling\nI0515 23:50:32.383327 1318 log.go:172] (0xc0006a6e60) (3) Data frame sent\nI0515 23:50:32.384135 1318 log.go:172] (0xc00090c790) Data frame received for 5\nI0515 23:50:32.384184 1318 log.go:172] (0xc00064a6e0) (5) Data frame handling\nI0515 23:50:32.384222 1318 log.go:172] (0xc00090c790) Data frame received for 3\nI0515 23:50:32.384277 1318 log.go:172] (0xc0006a6e60) (3) Data frame handling\nI0515 23:50:32.385701 1318 log.go:172] (0xc00090c790) Data frame received for 1\nI0515 23:50:32.385752 1318 log.go:172] (0xc0003586e0) (1) Data frame handling\nI0515 23:50:32.385784 1318 log.go:172] (0xc0003586e0) (1) Data frame sent\nI0515 23:50:32.385818 1318 log.go:172] (0xc00090c790) (0xc0003586e0) Stream removed, broadcasting: 1\nI0515 23:50:32.385857 1318 log.go:172] (0xc00090c790) Go away received\nI0515 23:50:32.386315 1318 log.go:172] (0xc00090c790) (0xc0003586e0) Stream removed, broadcasting: 1\nI0515 23:50:32.386340 1318 log.go:172] (0xc00090c790) (0xc0006a6e60) Stream removed, broadcasting: 3\nI0515 23:50:32.386353 1318 log.go:172] (0xc00090c790) (0xc00064a6e0) Stream removed, broadcasting: 5\n" May 15 23:50:32.392: INFO: stdout: "affinity-clusterip-timeout-n92kj" May 15 23:50:47.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3007 execpod-affinity6th6x -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.105.53.227:80/' May 15 23:50:47.620: INFO: stderr: "I0515 23:50:47.551452 1340 log.go:172] (0xc000aaf080) (0xc00085fc20) Create stream\nI0515 23:50:47.551545 1340 log.go:172] (0xc000aaf080) (0xc00085fc20) Stream added, 
broadcasting: 1\nI0515 23:50:47.558307 1340 log.go:172] (0xc000aaf080) Reply frame received for 1\nI0515 23:50:47.558364 1340 log.go:172] (0xc000aaf080) (0xc000362500) Create stream\nI0515 23:50:47.558381 1340 log.go:172] (0xc000aaf080) (0xc000362500) Stream added, broadcasting: 3\nI0515 23:50:47.560277 1340 log.go:172] (0xc000aaf080) Reply frame received for 3\nI0515 23:50:47.560324 1340 log.go:172] (0xc000aaf080) (0xc000139e00) Create stream\nI0515 23:50:47.560336 1340 log.go:172] (0xc000aaf080) (0xc000139e00) Stream added, broadcasting: 5\nI0515 23:50:47.562375 1340 log.go:172] (0xc000aaf080) Reply frame received for 5\nI0515 23:50:47.610850 1340 log.go:172] (0xc000aaf080) Data frame received for 5\nI0515 23:50:47.610874 1340 log.go:172] (0xc000139e00) (5) Data frame handling\nI0515 23:50:47.610890 1340 log.go:172] (0xc000139e00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.105.53.227:80/\nI0515 23:50:47.613037 1340 log.go:172] (0xc000aaf080) Data frame received for 3\nI0515 23:50:47.613050 1340 log.go:172] (0xc000362500) (3) Data frame handling\nI0515 23:50:47.613061 1340 log.go:172] (0xc000362500) (3) Data frame sent\nI0515 23:50:47.613585 1340 log.go:172] (0xc000aaf080) Data frame received for 3\nI0515 23:50:47.613607 1340 log.go:172] (0xc000362500) (3) Data frame handling\nI0515 23:50:47.613907 1340 log.go:172] (0xc000aaf080) Data frame received for 5\nI0515 23:50:47.613925 1340 log.go:172] (0xc000139e00) (5) Data frame handling\nI0515 23:50:47.615049 1340 log.go:172] (0xc000aaf080) Data frame received for 1\nI0515 23:50:47.615072 1340 log.go:172] (0xc00085fc20) (1) Data frame handling\nI0515 23:50:47.615085 1340 log.go:172] (0xc00085fc20) (1) Data frame sent\nI0515 23:50:47.615102 1340 log.go:172] (0xc000aaf080) (0xc00085fc20) Stream removed, broadcasting: 1\nI0515 23:50:47.615133 1340 log.go:172] (0xc000aaf080) Go away received\nI0515 23:50:47.615609 1340 log.go:172] (0xc000aaf080) (0xc00085fc20) Stream removed, broadcasting: 1\nI0515 23:50:47.615639 1340 log.go:172] (0xc000aaf080) (0xc000362500) Stream removed, broadcasting: 3\nI0515 23:50:47.615652 1340 log.go:172] (0xc000aaf080) (0xc000139e00) Stream removed, broadcasting: 5\n" May 15 23:50:47.620: INFO: stdout: "affinity-clusterip-timeout-flzv2" May 15 23:50:47.620: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-3007, will wait for the garbage collector to delete the pods May 15 23:50:47.739: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.254436ms May 15 23:50:48.239: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.233853ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:51:05.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3007" for this suite. 
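The affinity behavior observed above (every request landing on affinity-clusterip-timeout-n92kj until a long enough idle gap, after which traffic lands on affinity-clusterip-timeout-flzv2) comes from ClientIP session affinity with a timeout on the Service. A minimal client-go sketch of such a Service follows; the namespace and service name mirror the log, while the selector and the 10-second timeout are illustrative assumptions, not values read from the test source.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	timeoutSeconds := int32(10) // hypothetical value for illustration
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
		Spec: corev1.ServiceSpec{
			// Assumed label selector matching the backing replication controller's pods.
			Selector: map[string]string{"name": "affinity-clusterip-timeout"},
			Ports:    []corev1.ServicePort{{Port: 80, Protocol: corev1.ProtocolTCP}},
			// Pin each client IP to one backend pod...
			SessionAffinity: corev1.ServiceAffinityClientIP,
			// ...but let the affinity entry expire after timeoutSeconds of idleness,
			// which is why the backend eventually changes in the log above.
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeoutSeconds},
			},
		},
	}
	if _, err := clientset.CoreV1().Services("services-3007").Create(
		context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```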
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:90.863 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":45,"skipped":832,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:51:05.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 23:51:05.566: INFO: Waiting up to 5m0s for pod "downwardapi-volume-93431e43-18ec-4755-86c4-79bf4c8353fb" in namespace "projected-4870" to be "Succeeded or Failed" May 15 23:51:05.727: INFO: Pod "downwardapi-volume-93431e43-18ec-4755-86c4-79bf4c8353fb": Phase="Pending", Reason="", readiness=false. Elapsed: 161.06261ms May 15 23:51:07.939: INFO: Pod "downwardapi-volume-93431e43-18ec-4755-86c4-79bf4c8353fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37305781s May 15 23:51:09.944: INFO: Pod "downwardapi-volume-93431e43-18ec-4755-86c4-79bf4c8353fb": Phase="Running", Reason="", readiness=true. Elapsed: 4.377880203s May 15 23:51:11.948: INFO: Pod "downwardapi-volume-93431e43-18ec-4755-86c4-79bf4c8353fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.382307735s STEP: Saw pod success May 15 23:51:11.948: INFO: Pod "downwardapi-volume-93431e43-18ec-4755-86c4-79bf4c8353fb" satisfied condition "Succeeded or Failed" May 15 23:51:11.952: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-93431e43-18ec-4755-86c4-79bf4c8353fb container client-container: STEP: delete the pod May 15 23:51:12.024: INFO: Waiting for pod downwardapi-volume-93431e43-18ec-4755-86c4-79bf4c8353fb to disappear May 15 23:51:12.055: INFO: Pod downwardapi-volume-93431e43-18ec-4755-86c4-79bf4c8353fb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:51:12.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4870" for this suite. 
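What the pod under test mounts is a projected downwardAPI volume whose file is backed by resourceFieldRef limits.memory; because the container declares no memory limit, the kubelet writes the node's allocatable memory into the file instead, which is the defaulting this spec verifies. A sketch of that pod shape follows; the pod name, mount path, and agnhost arguments are illustrative (the subcommand and flag follow the agnhost image used elsewhere in this run), not the test's exact spec.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
				// No resources.limits.memory is set, so the projected file below
				// falls back to the node's allocatable memory.
				Args: []string{"mounttest", "--file_content=/etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "podinfo", MountPath: "/etc/podinfo",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", downwardAPIPod())
}
```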
• [SLOW TEST:6.947 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":46,"skipped":847,"failed":0} SS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:51:12.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 15 23:51:16.191: INFO: &Pod{ObjectMeta:{send-events-79e20c8e-318c-4406-99f4-3d5ad64d055e events-8790 /api/v1/namespaces/events-8790/pods/send-events-79e20c8e-318c-4406-99f4-3d5ad64d055e 84c733d3-a228-4829-a886-805418bc04a5 4999941 0 2020-05-15 23:51:12 +0000 UTC map[name:foo time:156565688] map[] [] [] [{e2e.test Update v1 2020-05-15 23:51:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-15 23:51:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.76\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t9lr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t9lr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t9lr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 23:51:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 23:51:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 23:51:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 23:51:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.76,StartTime:2020-05-15 23:51:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 23:51:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://f0dcf684274095fb599e73c0ea8ab18993b9bc8a0ebae72f17263194158147cf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.76,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 15 23:51:18.199: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 15 23:51:20.202: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:51:20.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8790" for this suite. • [SLOW TEST:8.145 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":47,"skipped":849,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:51:20.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-0c05cda2-9732-4566-99ef-503094d72fcf STEP: Creating a pod to test consume configMaps May 15 23:51:20.326: INFO: Waiting up to 5m0s for pod "pod-configmaps-071e24f0-cce8-4004-b0c6-b1f013321e97" in namespace "configmap-8882" to be "Succeeded or Failed" May 15 23:51:20.340: INFO: Pod "pod-configmaps-071e24f0-cce8-4004-b0c6-b1f013321e97": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.978323ms May 15 23:51:22.374: INFO: Pod "pod-configmaps-071e24f0-cce8-4004-b0c6-b1f013321e97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048031406s May 15 23:51:24.378: INFO: Pod "pod-configmaps-071e24f0-cce8-4004-b0c6-b1f013321e97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052497542s STEP: Saw pod success May 15 23:51:24.379: INFO: Pod "pod-configmaps-071e24f0-cce8-4004-b0c6-b1f013321e97" satisfied condition "Succeeded or Failed" May 15 23:51:24.381: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-071e24f0-cce8-4004-b0c6-b1f013321e97 container configmap-volume-test: STEP: delete the pod May 15 23:51:24.489: INFO: Waiting for pod pod-configmaps-071e24f0-cce8-4004-b0c6-b1f013321e97 to disappear May 15 23:51:24.503: INFO: Pod pod-configmaps-071e24f0-cce8-4004-b0c6-b1f013321e97 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:51:24.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8882" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":48,"skipped":866,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:51:24.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-dd463009-42fc-409b-93f1-0ac11a21fd0b STEP: Creating the pod STEP: Updating configmap configmap-test-upd-dd463009-42fc-409b-93f1-0ac11a21fd0b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:51:30.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5279" for this suite. 
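The ConfigMap update-propagation behavior exercised above ("waiting to observe update in volume") can be reproduced by hand. The sketch below is illustrative only; the names, image, and mount path are hypothetical, not the suite's generated ones. The kubelet refreshes configMap volume contents on its periodic sync, so the new value appears after a short delay rather than instantly, which is exactly why the spec has a wait step.

# Illustrative sketch, not the suite's exact objects (hypothetical names/image).
kubectl create configmap demo-config --from-literal=mutation=1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-watcher              # hypothetical
spec:
  containers:
  - name: watcher
    image: busybox                     # any long-running image will do
    command: ["sh", "-c", "while true; do cat /etc/config/mutation; echo; sleep 2; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
EOF

# Mutate the ConfigMap, then watch the projected file catch up:
kubectl patch configmap demo-config -p '{"data":{"mutation":"2"}}'
kubectl logs configmap-watcher --tail=2   # eventually prints 2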
• [SLOW TEST:6.259 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":49,"skipped":882,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:51:30.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 15 23:51:35.122: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:51:35.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3156" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":50,"skipped":895,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:51:35.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:51:35.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4716" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":288,"completed":51,"skipped":930,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:51:35.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 23:51:36.542: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 23:51:38.734: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725183496, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725183496, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725183496, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725183496, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 23:51:41.837: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:51:41.895: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "webhook-5955" for this suite. STEP: Destroying namespace "webhook-5955-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.658 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":52,"skipped":936,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:51:42.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2610 STEP: creating service affinity-nodeport in namespace services-2610 STEP: creating replication controller affinity-nodeport in namespace services-2610 I0515 23:51:42.219272 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-2610, replica count: 3 I0515 23:51:45.269679 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:51:48.269914 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 23:51:48.334: INFO: Creating new exec pod May 15 23:51:53.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2610 execpod-affinityxf8dk -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 15 23:51:53.679: INFO: stderr: "I0515 23:51:53.583291 1359 log.go:172] (0xc0009a5600) (0xc00068dd60) Create stream\nI0515 23:51:53.583359 1359 log.go:172] (0xc0009a5600) (0xc00068dd60) Stream added, broadcasting: 1\nI0515 23:51:53.587857 1359 log.go:172] (0xc0009a5600) Reply frame received for 1\nI0515 23:51:53.587904 1359 log.go:172] (0xc0009a5600) (0xc00066ef00) Create stream\nI0515 23:51:53.587918 1359 log.go:172] (0xc0009a5600) (0xc00066ef00) Stream added, broadcasting: 3\nI0515 23:51:53.588782 1359 log.go:172] (0xc0009a5600) Reply frame received for 3\nI0515 23:51:53.588825 1359 log.go:172] (0xc0009a5600) (0xc00030ae60) Create stream\nI0515 23:51:53.588836 1359 log.go:172] (0xc0009a5600) (0xc00030ae60) Stream added, broadcasting: 5\nI0515 23:51:53.589795 1359 log.go:172] 
(0xc0009a5600) Reply frame received for 5\nI0515 23:51:53.672105 1359 log.go:172] (0xc0009a5600) Data frame received for 3\nI0515 23:51:53.672139 1359 log.go:172] (0xc00066ef00) (3) Data frame handling\nI0515 23:51:53.672185 1359 log.go:172] (0xc0009a5600) Data frame received for 5\nI0515 23:51:53.672215 1359 log.go:172] (0xc00030ae60) (5) Data frame handling\nI0515 23:51:53.672240 1359 log.go:172] (0xc00030ae60) (5) Data frame sent\nI0515 23:51:53.672250 1359 log.go:172] (0xc0009a5600) Data frame received for 5\nI0515 23:51:53.672258 1359 log.go:172] (0xc00030ae60) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0515 23:51:53.672279 1359 log.go:172] (0xc00030ae60) (5) Data frame sent\nI0515 23:51:53.672287 1359 log.go:172] (0xc0009a5600) Data frame received for 5\nI0515 23:51:53.672294 1359 log.go:172] (0xc00030ae60) (5) Data frame handling\nI0515 23:51:53.674766 1359 log.go:172] (0xc0009a5600) Data frame received for 1\nI0515 23:51:53.674786 1359 log.go:172] (0xc00068dd60) (1) Data frame handling\nI0515 23:51:53.674815 1359 log.go:172] (0xc00068dd60) (1) Data frame sent\nI0515 23:51:53.674948 1359 log.go:172] (0xc0009a5600) (0xc00068dd60) Stream removed, broadcasting: 1\nI0515 23:51:53.675098 1359 log.go:172] (0xc0009a5600) Go away received\nI0515 23:51:53.675275 1359 log.go:172] (0xc0009a5600) (0xc00068dd60) Stream removed, broadcasting: 1\nI0515 23:51:53.675289 1359 log.go:172] (0xc0009a5600) (0xc00066ef00) Stream removed, broadcasting: 3\nI0515 23:51:53.675297 1359 log.go:172] (0xc0009a5600) (0xc00030ae60) Stream removed, broadcasting: 5\n" May 15 23:51:53.679: INFO: stdout: "" May 15 23:51:53.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2610 execpod-affinityxf8dk -- /bin/sh -x -c nc -zv -t -w 2 10.101.184.137 80' May 15 23:51:53.914: INFO: stderr: "I0515 23:51:53.822449 1378 log.go:172] (0xc00041bb80) (0xc0002eac80) Create stream\nI0515 23:51:53.822502 1378 log.go:172] (0xc00041bb80) (0xc0002eac80) Stream added, broadcasting: 1\nI0515 23:51:53.835478 1378 log.go:172] (0xc00041bb80) Reply frame received for 1\nI0515 23:51:53.835545 1378 log.go:172] (0xc00041bb80) (0xc00013b9a0) Create stream\nI0515 23:51:53.835560 1378 log.go:172] (0xc00041bb80) (0xc00013b9a0) Stream added, broadcasting: 3\nI0515 23:51:53.836551 1378 log.go:172] (0xc00041bb80) Reply frame received for 3\nI0515 23:51:53.836594 1378 log.go:172] (0xc00041bb80) (0xc00034f0e0) Create stream\nI0515 23:51:53.836610 1378 log.go:172] (0xc00041bb80) (0xc00034f0e0) Stream added, broadcasting: 5\nI0515 23:51:53.837976 1378 log.go:172] (0xc00041bb80) Reply frame received for 5\nI0515 23:51:53.906959 1378 log.go:172] (0xc00041bb80) Data frame received for 5\nI0515 23:51:53.906998 1378 log.go:172] (0xc00034f0e0) (5) Data frame handling\nI0515 23:51:53.907013 1378 log.go:172] (0xc00034f0e0) (5) Data frame sent\nI0515 23:51:53.907023 1378 log.go:172] (0xc00041bb80) Data frame received for 5\nI0515 23:51:53.907030 1378 log.go:172] (0xc00034f0e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.184.137 80\nConnection to 10.101.184.137 80 port [tcp/http] succeeded!\nI0515 23:51:53.907067 1378 log.go:172] (0xc00041bb80) Data frame received for 3\nI0515 23:51:53.907106 1378 log.go:172] (0xc00013b9a0) (3) Data frame handling\nI0515 23:51:53.908645 1378 log.go:172] (0xc00041bb80) Data frame received for 1\nI0515 23:51:53.908664 1378 log.go:172] (0xc0002eac80) (1) Data 
frame handling\nI0515 23:51:53.908681 1378 log.go:172] (0xc0002eac80) (1) Data frame sent\nI0515 23:51:53.908695 1378 log.go:172] (0xc00041bb80) (0xc0002eac80) Stream removed, broadcasting: 1\nI0515 23:51:53.908754 1378 log.go:172] (0xc00041bb80) Go away received\nI0515 23:51:53.909076 1378 log.go:172] (0xc00041bb80) (0xc0002eac80) Stream removed, broadcasting: 1\nI0515 23:51:53.909093 1378 log.go:172] (0xc00041bb80) (0xc00013b9a0) Stream removed, broadcasting: 3\nI0515 23:51:53.909103 1378 log.go:172] (0xc00041bb80) (0xc00034f0e0) Stream removed, broadcasting: 5\n" May 15 23:51:53.914: INFO: stdout: "" May 15 23:51:53.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2610 execpod-affinityxf8dk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30092' May 15 23:51:54.118: INFO: stderr: "I0515 23:51:54.039380 1399 log.go:172] (0xc00003a210) (0xc000151d60) Create stream\nI0515 23:51:54.039455 1399 log.go:172] (0xc00003a210) (0xc000151d60) Stream added, broadcasting: 1\nI0515 23:51:54.042006 1399 log.go:172] (0xc00003a210) Reply frame received for 1\nI0515 23:51:54.042057 1399 log.go:172] (0xc00003a210) (0xc0004488c0) Create stream\nI0515 23:51:54.042070 1399 log.go:172] (0xc00003a210) (0xc0004488c0) Stream added, broadcasting: 3\nI0515 23:51:54.042925 1399 log.go:172] (0xc00003a210) Reply frame received for 3\nI0515 23:51:54.042979 1399 log.go:172] (0xc00003a210) (0xc00069c500) Create stream\nI0515 23:51:54.043007 1399 log.go:172] (0xc00003a210) (0xc00069c500) Stream added, broadcasting: 5\nI0515 23:51:54.043861 1399 log.go:172] (0xc00003a210) Reply frame received for 5\nI0515 23:51:54.110522 1399 log.go:172] (0xc00003a210) Data frame received for 5\nI0515 23:51:54.110557 1399 log.go:172] (0xc00069c500) (5) Data frame handling\nI0515 23:51:54.110584 1399 log.go:172] (0xc00069c500) (5) Data frame sent\nI0515 23:51:54.110598 1399 log.go:172] (0xc00003a210) Data frame received for 5\nI0515 23:51:54.110608 1399 log.go:172] (0xc00069c500) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30092\nConnection to 172.17.0.13 30092 port [tcp/30092] succeeded!\nI0515 23:51:54.110635 1399 log.go:172] (0xc00069c500) (5) Data frame sent\nI0515 23:51:54.110768 1399 log.go:172] (0xc00003a210) Data frame received for 3\nI0515 23:51:54.110791 1399 log.go:172] (0xc0004488c0) (3) Data frame handling\nI0515 23:51:54.110957 1399 log.go:172] (0xc00003a210) Data frame received for 5\nI0515 23:51:54.110971 1399 log.go:172] (0xc00069c500) (5) Data frame handling\nI0515 23:51:54.112976 1399 log.go:172] (0xc00003a210) Data frame received for 1\nI0515 23:51:54.113048 1399 log.go:172] (0xc000151d60) (1) Data frame handling\nI0515 23:51:54.113080 1399 log.go:172] (0xc000151d60) (1) Data frame sent\nI0515 23:51:54.113101 1399 log.go:172] (0xc00003a210) (0xc000151d60) Stream removed, broadcasting: 1\nI0515 23:51:54.113278 1399 log.go:172] (0xc00003a210) Go away received\nI0515 23:51:54.113628 1399 log.go:172] (0xc00003a210) (0xc000151d60) Stream removed, broadcasting: 1\nI0515 23:51:54.113647 1399 log.go:172] (0xc00003a210) (0xc0004488c0) Stream removed, broadcasting: 3\nI0515 23:51:54.113655 1399 log.go:172] (0xc00003a210) (0xc00069c500) Stream removed, broadcasting: 5\n" May 15 23:51:54.119: INFO: stdout: "" May 15 23:51:54.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2610 execpod-affinityxf8dk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30092' May 
15 23:51:54.343: INFO: stderr: "I0515 23:51:54.261321 1421 log.go:172] (0xc000a5cc60) (0xc0001528c0) Create stream\nI0515 23:51:54.261396 1421 log.go:172] (0xc000a5cc60) (0xc0001528c0) Stream added, broadcasting: 1\nI0515 23:51:54.264216 1421 log.go:172] (0xc000a5cc60) Reply frame received for 1\nI0515 23:51:54.264266 1421 log.go:172] (0xc000a5cc60) (0xc0001b60a0) Create stream\nI0515 23:51:54.264289 1421 log.go:172] (0xc000a5cc60) (0xc0001b60a0) Stream added, broadcasting: 3\nI0515 23:51:54.265381 1421 log.go:172] (0xc000a5cc60) Reply frame received for 3\nI0515 23:51:54.265418 1421 log.go:172] (0xc000a5cc60) (0xc0001b6820) Create stream\nI0515 23:51:54.265439 1421 log.go:172] (0xc000a5cc60) (0xc0001b6820) Stream added, broadcasting: 5\nI0515 23:51:54.266491 1421 log.go:172] (0xc000a5cc60) Reply frame received for 5\nI0515 23:51:54.334661 1421 log.go:172] (0xc000a5cc60) Data frame received for 5\nI0515 23:51:54.334715 1421 log.go:172] (0xc0001b6820) (5) Data frame handling\nI0515 23:51:54.334728 1421 log.go:172] (0xc0001b6820) (5) Data frame sent\nI0515 23:51:54.334737 1421 log.go:172] (0xc000a5cc60) Data frame received for 5\nI0515 23:51:54.334743 1421 log.go:172] (0xc0001b6820) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30092\nConnection to 172.17.0.12 30092 port [tcp/30092] succeeded!\nI0515 23:51:54.334751 1421 log.go:172] (0xc000a5cc60) Data frame received for 3\nI0515 23:51:54.334817 1421 log.go:172] (0xc0001b60a0) (3) Data frame handling\nI0515 23:51:54.336430 1421 log.go:172] (0xc000a5cc60) Data frame received for 1\nI0515 23:51:54.336526 1421 log.go:172] (0xc0001528c0) (1) Data frame handling\nI0515 23:51:54.336597 1421 log.go:172] (0xc0001528c0) (1) Data frame sent\nI0515 23:51:54.336615 1421 log.go:172] (0xc000a5cc60) (0xc0001528c0) Stream removed, broadcasting: 1\nI0515 23:51:54.336634 1421 log.go:172] (0xc000a5cc60) Go away received\nI0515 23:51:54.336992 1421 log.go:172] (0xc000a5cc60) (0xc0001528c0) Stream removed, broadcasting: 1\nI0515 23:51:54.337022 1421 log.go:172] (0xc000a5cc60) (0xc0001b60a0) Stream removed, broadcasting: 3\nI0515 23:51:54.337032 1421 log.go:172] (0xc000a5cc60) (0xc0001b6820) Stream removed, broadcasting: 5\n" May 15 23:51:54.343: INFO: stdout: "" May 15 23:51:54.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2610 execpod-affinityxf8dk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30092/ ; done' May 15 23:51:54.619: INFO: stderr: "I0515 23:51:54.465583 1441 log.go:172] (0xc000a79a20) (0xc000a005a0) Create stream\nI0515 23:51:54.465642 1441 log.go:172] (0xc000a79a20) (0xc000a005a0) Stream added, broadcasting: 1\nI0515 23:51:54.469954 1441 log.go:172] (0xc000a79a20) Reply frame received for 1\nI0515 23:51:54.470004 1441 log.go:172] (0xc000a79a20) (0xc000630dc0) Create stream\nI0515 23:51:54.470016 1441 log.go:172] (0xc000a79a20) (0xc000630dc0) Stream added, broadcasting: 3\nI0515 23:51:54.470838 1441 log.go:172] (0xc000a79a20) Reply frame received for 3\nI0515 23:51:54.470858 1441 log.go:172] (0xc000a79a20) (0xc0004aa0a0) Create stream\nI0515 23:51:54.470865 1441 log.go:172] (0xc000a79a20) (0xc0004aa0a0) Stream added, broadcasting: 5\nI0515 23:51:54.471528 1441 log.go:172] (0xc000a79a20) Reply frame received for 5\nI0515 23:51:54.521636 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.521663 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.521673 1441 
log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.521692 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.521699 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\nI0515 23:51:54.521707 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30092/\nI0515 23:51:54.524951 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.524979 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.525017 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.526097 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.526120 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\nI0515 23:51:54.526131 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\nI0515 23:51:54.526140 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.526152 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30092/\nI0515 23:51:54.526170 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.526192 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.526209 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.526227 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\nI0515 23:51:54.529737 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.529761 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.529785 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.530601 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.530641 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\nI0515 23:51:54.530662 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30092/\nI0515 23:51:54.530681 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.530723 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.530748 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.534324 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.534354 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.534392 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.534836 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.534860 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.534874 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.534904 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.534927 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\nI0515 23:51:54.534953 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30092/\nI0515 23:51:54.542662 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.542686 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.542712 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.543404 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.543446 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.543463 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.543488 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.543505 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\nI0515 
23:51:54.543541 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30092/\nI0515 23:51:54.548385 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.548417 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.548434 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.549085 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.549104 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\nI0515 23:51:54.549277 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30092/\nI0515 23:51:54.549302 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.549326 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.549353 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.553011 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.553032 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.553051 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.553580 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.553628 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30092/\nI0515 23:51:54.553648 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.553682 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.553700 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.553734 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\nI0515 23:51:54.558362 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.558385 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.558414 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.558979 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.558997 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.559011 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.559035 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\nI0515 23:51:54.559042 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30092/\nI0515 23:51:54.559059 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.564217 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.564234 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.564252 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.564777 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.564796 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.564803 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.564817 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.564827 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\nI0515 23:51:54.564833 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30092/\nI0515 23:51:54.568832 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.568862 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.568878 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.569537 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 
23:51:54.569572 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.569587 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.569606 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.569615 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\nI0515 23:51:54.569631 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30092/\nI0515 23:51:54.573045 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.573065 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.573080 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.573699 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.573731 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.573743 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.573762 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.573772 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\nI0515 23:51:54.573782 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30092/\nI0515 23:51:54.576843 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.576854 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.576860 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.577681 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.577707 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.577733 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.577752 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.577762 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\nI0515 23:51:54.577779 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30092/\nI0515 23:51:54.583886 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.583913 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.583938 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.584456 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.584495 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.584515 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.584540 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.584553 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\nI0515 23:51:54.584566 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30092/\nI0515 23:51:54.590058 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.590075 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.590089 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.590597 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.590650 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.590671 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.590689 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.590699 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\nI0515 23:51:54.590709 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30092/\nI0515 
23:51:54.595512 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.595540 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.595554 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.596038 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.596055 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.596075 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\nI0515 23:51:54.596083 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30092/\nI0515 23:51:54.596095 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.596114 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.602681 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.602708 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.602740 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.603427 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.603445 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.603456 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.603492 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.603517 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\nI0515 23:51:54.603529 1441 log.go:172] (0xc0004aa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30092/\nI0515 23:51:54.610059 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.610076 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.610097 1441 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 23:51:54.610852 1441 log.go:172] (0xc000a79a20) Data frame received for 5\nI0515 23:51:54.610872 1441 log.go:172] (0xc0004aa0a0) (5) Data frame handling\nI0515 23:51:54.610898 1441 log.go:172] (0xc000a79a20) Data frame received for 3\nI0515 23:51:54.610941 1441 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 23:51:54.613033 1441 log.go:172] (0xc000a79a20) Data frame received for 1\nI0515 23:51:54.613068 1441 log.go:172] (0xc000a005a0) (1) Data frame handling\nI0515 23:51:54.613098 1441 log.go:172] (0xc000a005a0) (1) Data frame sent\nI0515 23:51:54.613338 1441 log.go:172] (0xc000a79a20) (0xc000a005a0) Stream removed, broadcasting: 1\nI0515 23:51:54.613378 1441 log.go:172] (0xc000a79a20) Go away received\nI0515 23:51:54.613880 1441 log.go:172] (0xc000a79a20) (0xc000a005a0) Stream removed, broadcasting: 1\nI0515 23:51:54.613904 1441 log.go:172] (0xc000a79a20) (0xc000630dc0) Stream removed, broadcasting: 3\nI0515 23:51:54.613923 1441 log.go:172] (0xc000a79a20) (0xc0004aa0a0) Stream removed, broadcasting: 5\n" May 15 23:51:54.620: INFO: stdout: "\naffinity-nodeport-v7xt9\naffinity-nodeport-v7xt9\naffinity-nodeport-v7xt9\naffinity-nodeport-v7xt9\naffinity-nodeport-v7xt9\naffinity-nodeport-v7xt9\naffinity-nodeport-v7xt9\naffinity-nodeport-v7xt9\naffinity-nodeport-v7xt9\naffinity-nodeport-v7xt9\naffinity-nodeport-v7xt9\naffinity-nodeport-v7xt9\naffinity-nodeport-v7xt9\naffinity-nodeport-v7xt9\naffinity-nodeport-v7xt9\naffinity-nodeport-v7xt9" May 15 23:51:54.620: INFO: Received response from host: May 15 23:51:54.620: INFO: Received response from host: affinity-nodeport-v7xt9 May 15 23:51:54.620: INFO: Received response from host: affinity-nodeport-v7xt9 May 15 23:51:54.620: INFO: Received response from host: affinity-nodeport-v7xt9 May 15 
23:51:54.620: INFO: Received response from host: affinity-nodeport-v7xt9 May 15 23:51:54.620: INFO: Received response from host: affinity-nodeport-v7xt9 May 15 23:51:54.620: INFO: Received response from host: affinity-nodeport-v7xt9 May 15 23:51:54.620: INFO: Received response from host: affinity-nodeport-v7xt9 May 15 23:51:54.620: INFO: Received response from host: affinity-nodeport-v7xt9 May 15 23:51:54.620: INFO: Received response from host: affinity-nodeport-v7xt9 May 15 23:51:54.620: INFO: Received response from host: affinity-nodeport-v7xt9 May 15 23:51:54.620: INFO: Received response from host: affinity-nodeport-v7xt9 May 15 23:51:54.620: INFO: Received response from host: affinity-nodeport-v7xt9 May 15 23:51:54.620: INFO: Received response from host: affinity-nodeport-v7xt9 May 15 23:51:54.620: INFO: Received response from host: affinity-nodeport-v7xt9 May 15 23:51:54.620: INFO: Received response from host: affinity-nodeport-v7xt9 May 15 23:51:54.620: INFO: Received response from host: affinity-nodeport-v7xt9 May 15 23:51:54.620: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-2610, will wait for the garbage collector to delete the pods May 15 23:51:54.731: INFO: Deleting ReplicationController affinity-nodeport took: 30.493109ms May 15 23:51:55.031: INFO: Terminating ReplicationController affinity-nodeport pods took: 300.247019ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:52:05.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2610" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:23.376 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":53,"skipped":944,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:52:05.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 15 23:52:05.485: INFO: >>> kubeConfig: /root/.kube/config May 15 23:52:08.487: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:52:19.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-290" for this suite. • [SLOW TEST:13.864 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":54,"skipped":965,"failed":0} SS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:52:19.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 15 23:52:19.435: INFO: Waiting up to 5m0s for pod "downward-api-4cc54d7f-67c4-409c-b8bd-65f2d2725b9b" in namespace "downward-api-1836" to be "Succeeded or Failed" May 15 23:52:19.440: INFO: Pod "downward-api-4cc54d7f-67c4-409c-b8bd-65f2d2725b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.443208ms May 15 23:52:21.444: INFO: Pod "downward-api-4cc54d7f-67c4-409c-b8bd-65f2d2725b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00908614s May 15 23:52:23.447: INFO: Pod "downward-api-4cc54d7f-67c4-409c-b8bd-65f2d2725b9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011979688s STEP: Saw pod success May 15 23:52:23.447: INFO: Pod "downward-api-4cc54d7f-67c4-409c-b8bd-65f2d2725b9b" satisfied condition "Succeeded or Failed" May 15 23:52:23.449: INFO: Trying to get logs from node latest-worker2 pod downward-api-4cc54d7f-67c4-409c-b8bd-65f2d2725b9b container dapi-container: STEP: delete the pod May 15 23:52:23.766: INFO: Waiting for pod downward-api-4cc54d7f-67c4-409c-b8bd-65f2d2725b9b to disappear May 15 23:52:23.806: INFO: Pod downward-api-4cc54d7f-67c4-409c-b8bd-65f2d2725b9b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:52:23.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1836" for this suite. 
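The Downward API plumbing this spec relies on is env[].valueFrom.fieldRef. A hand-written equivalent of the "pod UID as env vars" check, with hypothetical names and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo              # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid      # the field asserted on above
EOF
kubectl logs downward-uid-demo         # prints POD_UID=<this pod's metadata.uid>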
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":55,"skipped":967,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:52:23.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-5467 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5467 to expose endpoints map[] May 15 23:52:24.156: INFO: successfully validated that service endpoint-test2 in namespace services-5467 exposes endpoints map[] (52.769168ms elapsed) STEP: Creating pod pod1 in namespace services-5467 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5467 to expose endpoints map[pod1:[80]] May 15 23:52:28.398: INFO: successfully validated that service endpoint-test2 in namespace services-5467 exposes endpoints map[pod1:[80]] (4.217041092s elapsed) STEP: Creating pod pod2 in namespace services-5467 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5467 to expose endpoints map[pod1:[80] pod2:[80]] May 15 23:52:31.584: INFO: successfully validated that service endpoint-test2 in namespace services-5467 exposes endpoints map[pod1:[80] pod2:[80]] (3.180897712s elapsed) STEP: Deleting pod pod1 in namespace services-5467 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5467 to expose endpoints map[pod2:[80]] May 15 23:52:32.707: INFO: successfully validated that service endpoint-test2 in namespace services-5467 exposes endpoints map[pod2:[80]] (1.117126036s elapsed) STEP: Deleting pod pod2 in namespace services-5467 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5467 to expose endpoints map[] May 15 23:52:33.758: INFO: successfully validated that service endpoint-test2 in namespace services-5467 exposes endpoints map[] (1.046706095s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:52:34.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5467" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:10.296 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":56,"skipped":974,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:52:34.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 15 23:52:34.713: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5887 /api/v1/namespaces/watch-5887/configmaps/e2e-watch-test-resource-version 11c198bd-6f65-4778-af9b-b72909518e6e 5000628 0 2020-05-15 23:52:34 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-15 23:52:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 15 23:52:34.713: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5887 /api/v1/namespaces/watch-5887/configmaps/e2e-watch-test-resource-version 11c198bd-6f65-4778-af9b-b72909518e6e 5000629 0 2020-05-15 23:52:34 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-15 23:52:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:52:34.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5887" for this suite. 
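The watch guarantee verified above is plain API semantics: a watch opened with an old resourceVersion first replays every change that happened after that version, then streams live events. Through the raw API (namespace and resourceVersion below are placeholders; use the version returned by your own first update):

kubectl proxy --port=8001 &
# Replays the later MODIFIED and DELETED events for matching configmaps,
# then keeps streaming. "123" is a placeholder resourceVersion.
curl -sN "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=123"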
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":57,"skipped":976,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:52:34.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-6583be86-3f7f-49cd-9cc9-abd433312c8d in namespace container-probe-395 May 15 23:52:38.857: INFO: Started pod liveness-6583be86-3f7f-49cd-9cc9-abd433312c8d in namespace container-probe-395 STEP: checking the pod's current state and verifying that restartCount is present May 15 23:52:38.859: INFO: Initial restart count of pod liveness-6583be86-3f7f-49cd-9cc9-abd433312c8d is 0 May 15 23:52:52.908: INFO: Restart count of pod container-probe-395/liveness-6583be86-3f7f-49cd-9cc9-abd433312c8d is now 1 (14.049236062s elapsed) May 15 23:53:12.950: INFO: Restart count of pod container-probe-395/liveness-6583be86-3f7f-49cd-9cc9-abd433312c8d is now 2 (34.091237004s elapsed) May 15 23:53:32.997: INFO: Restart count of pod container-probe-395/liveness-6583be86-3f7f-49cd-9cc9-abd433312c8d is now 3 (54.137396987s elapsed) May 15 23:53:53.099: INFO: Restart count of pod container-probe-395/liveness-6583be86-3f7f-49cd-9cc9-abd433312c8d is now 4 (1m14.239634759s elapsed) May 15 23:55:01.239: INFO: Restart count of pod container-probe-395/liveness-6583be86-3f7f-49cd-9cc9-abd433312c8d is now 5 (2m22.379733196s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:55:01.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-395" for this suite. 
• [SLOW TEST:146.563 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":58,"skipped":1001,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:55:01.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-a8ba0d68-f511-4552-afcc-e4521fd407b9 STEP: Creating a pod to test consume secrets May 15 23:55:01.417: INFO: Waiting up to 5m0s for pod "pod-secrets-5c56db8c-8850-4dd9-9562-6496aaa1082b" in namespace "secrets-3096" to be "Succeeded or Failed" May 15 23:55:01.420: INFO: Pod "pod-secrets-5c56db8c-8850-4dd9-9562-6496aaa1082b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.726075ms May 15 23:55:03.425: INFO: Pod "pod-secrets-5c56db8c-8850-4dd9-9562-6496aaa1082b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007398435s May 15 23:55:05.429: INFO: Pod "pod-secrets-5c56db8c-8850-4dd9-9562-6496aaa1082b": Phase="Running", Reason="", readiness=true. Elapsed: 4.011065628s May 15 23:55:07.433: INFO: Pod "pod-secrets-5c56db8c-8850-4dd9-9562-6496aaa1082b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015030401s STEP: Saw pod success May 15 23:55:07.433: INFO: Pod "pod-secrets-5c56db8c-8850-4dd9-9562-6496aaa1082b" satisfied condition "Succeeded or Failed" May 15 23:55:07.435: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-5c56db8c-8850-4dd9-9562-6496aaa1082b container secret-volume-test: STEP: delete the pod May 15 23:55:07.499: INFO: Waiting for pod pod-secrets-5c56db8c-8850-4dd9-9562-6496aaa1082b to disappear May 15 23:55:07.537: INFO: Pod pod-secrets-5c56db8c-8850-4dd9-9562-6496aaa1082b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:55:07.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3096" for this suite. 
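defaultMode on a secret volume sets the file mode of every projected key. A hand-written variant of the check (hypothetical names; note 0400 is YAML octal, i.e. 256 when written as JSON):

kubectl create secret generic demo-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo               # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400                # -r-------- on each projected file
EOF
kubectl logs secret-mode-demo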
• [SLOW TEST:6.252 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":59,"skipped":1023,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:55:07.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:55:41.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4015" for this suite. 
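The three containers above (terminate-cmd-rpa / -rpof / -rpn) run a command that exits under restartPolicy Always, OnFailure, and Never respectively, then check the expected RestartCount, Phase, Ready condition, and State. A sketch of just the Never case, with illustrative names: a container that exits non-zero should leave the pod in Phase=Failed with a terminated state and no restarts:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: terminate-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["/bin/sh", "-c", "exit 1"]
    EOF
    # Expect "Failed 1 0" once the container has terminated.
    kubectl get pod terminate-demo -o jsonpath='{.status.phase} {.status.containerStatuses[0].state.terminated.exitCode} {.status.containerStatuses[0].restartCount}{"\n"}'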
• [SLOW TEST:34.195 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":60,"skipped":1026,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:55:41.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0515 23:55:42.935249 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 23:55:42.935: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:55:42.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1161" for this suite. 
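Above, the deployment is deleted with deleteOptions.PropagationPolicy=Orphan and the spec waits to confirm the garbage collector does not take the ReplicaSet down with it. The same check by hand, with an illustrative deployment name (kubectl of this log's vintage spells the flag --cascade=false; newer releases accept --cascade=orphan):

    kubectl create deployment orphan-demo --image=nginx
    kubectl get rs -l app=orphan-demo      # note the generated ReplicaSet
    kubectl delete deployment orphan-demo --cascade=orphan
    kubectl get rs -l app=orphan-demo      # still listed: the ReplicaSet was orphaned, not deleted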
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":61,"skipped":1027,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:55:42.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 23:57:43.260: INFO: Deleting pod "var-expansion-ca95292c-e85e-4bb9-8e29-6139de708cca" in namespace "var-expansion-4433" May 15 23:57:43.265: INFO: Wait up to 5m0s for pod "var-expansion-ca95292c-e85e-4bb9-8e29-6139de708cca" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:57:47.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4433" for this suite. • [SLOW TEST:124.392 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":62,"skipped":1029,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:57:47.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 23:57:47.443: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:57:51.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4713" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":63,"skipped":1103,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:57:51.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:58:07.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6093" for this suite. STEP: Destroying namespace "nsdeletetest-3039" for this suite. May 15 23:58:07.827: INFO: Namespace nsdeletetest-3039 was already deleted STEP: Destroying namespace "nsdeletetest-2028" for this suite. 
• [SLOW TEST:16.360 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":64,"skipped":1110,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:58:07.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 23:58:08.024: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51bbf5fb-c066-4ff3-8ee5-52e8df378ea0" in namespace "projected-3799" to be "Succeeded or Failed" May 15 23:58:08.043: INFO: Pod "downwardapi-volume-51bbf5fb-c066-4ff3-8ee5-52e8df378ea0": Phase="Pending", Reason="", readiness=false. Elapsed: 18.543143ms May 15 23:58:10.047: INFO: Pod "downwardapi-volume-51bbf5fb-c066-4ff3-8ee5-52e8df378ea0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022836764s May 15 23:58:12.060: INFO: Pod "downwardapi-volume-51bbf5fb-c066-4ff3-8ee5-52e8df378ea0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035214704s STEP: Saw pod success May 15 23:58:12.060: INFO: Pod "downwardapi-volume-51bbf5fb-c066-4ff3-8ee5-52e8df378ea0" satisfied condition "Succeeded or Failed" May 15 23:58:12.062: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-51bbf5fb-c066-4ff3-8ee5-52e8df378ea0 container client-container: STEP: delete the pod May 15 23:58:12.086: INFO: Waiting for pod downwardapi-volume-51bbf5fb-c066-4ff3-8ee5-52e8df378ea0 to disappear May 15 23:58:12.089: INFO: Pod downwardapi-volume-51bbf5fb-c066-4ff3-8ee5-52e8df378ea0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:58:12.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3799" for this suite. 
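Because the container above sets no CPU limit, the downward API volume falls back to reporting the node's allocatable CPU. A sketch of the same plumbing with illustrative names; the e2e test uses a projected downwardAPI volume, while this sketch uses a plain downwardAPI volume, which exposes the identical resourceFieldRef:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m      # report the value in millicores
    EOF
    kubectl logs downward-demo   # with no limit set, prints node-allocatable CPU in millicores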
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":65,"skipped":1121,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:58:12.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-7992 STEP: Creating a pod to test atomic-volume-subpath May 15 23:58:12.665: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-7992" in namespace "subpath-5366" to be "Succeeded or Failed" May 15 23:58:12.668: INFO: Pod "pod-subpath-test-downwardapi-7992": Phase="Pending", Reason="", readiness=false. Elapsed: 2.556848ms May 15 23:58:14.673: INFO: Pod "pod-subpath-test-downwardapi-7992": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007829607s May 15 23:58:16.677: INFO: Pod "pod-subpath-test-downwardapi-7992": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012236605s May 15 23:58:18.682: INFO: Pod "pod-subpath-test-downwardapi-7992": Phase="Running", Reason="", readiness=true. Elapsed: 6.016786705s May 15 23:58:20.687: INFO: Pod "pod-subpath-test-downwardapi-7992": Phase="Running", Reason="", readiness=true. Elapsed: 8.021376513s May 15 23:58:22.691: INFO: Pod "pod-subpath-test-downwardapi-7992": Phase="Running", Reason="", readiness=true. Elapsed: 10.025861122s May 15 23:58:24.695: INFO: Pod "pod-subpath-test-downwardapi-7992": Phase="Running", Reason="", readiness=true. Elapsed: 12.029618791s May 15 23:58:26.698: INFO: Pod "pod-subpath-test-downwardapi-7992": Phase="Running", Reason="", readiness=true. Elapsed: 14.033011689s May 15 23:58:28.718: INFO: Pod "pod-subpath-test-downwardapi-7992": Phase="Running", Reason="", readiness=true. Elapsed: 16.052899059s May 15 23:58:30.722: INFO: Pod "pod-subpath-test-downwardapi-7992": Phase="Running", Reason="", readiness=true. Elapsed: 18.057235078s May 15 23:58:32.727: INFO: Pod "pod-subpath-test-downwardapi-7992": Phase="Running", Reason="", readiness=true. Elapsed: 20.061903977s May 15 23:58:34.730: INFO: Pod "pod-subpath-test-downwardapi-7992": Phase="Running", Reason="", readiness=true. Elapsed: 22.064996544s May 15 23:58:36.734: INFO: Pod "pod-subpath-test-downwardapi-7992": Phase="Running", Reason="", readiness=true. Elapsed: 24.068673583s May 15 23:58:38.738: INFO: Pod "pod-subpath-test-downwardapi-7992": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.072548477s STEP: Saw pod success May 15 23:58:38.738: INFO: Pod "pod-subpath-test-downwardapi-7992" satisfied condition "Succeeded or Failed" May 15 23:58:38.741: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-7992 container test-container-subpath-downwardapi-7992: STEP: delete the pod May 15 23:58:38.773: INFO: Waiting for pod pod-subpath-test-downwardapi-7992 to disappear May 15 23:58:38.784: INFO: Pod pod-subpath-test-downwardapi-7992 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-7992 May 15 23:58:38.784: INFO: Deleting pod "pod-subpath-test-downwardapi-7992" in namespace "subpath-5366" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:58:38.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5366" for this suite. • [SLOW TEST:26.696 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":66,"skipped":1167,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:58:38.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1208 STEP: creating service affinity-clusterip-transition in namespace services-1208 STEP: creating replication controller affinity-clusterip-transition in namespace services-1208 I0515 23:58:38.924669 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-1208, replica count: 3 I0515 23:58:41.974970 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 23:58:44.975164 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 23:58:44.981: INFO: Creating new exec pod May 15 23:58:49.995: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1208 execpod-affinityjw9v2 -- /bin/sh -x -c nc -zv -t -w 2 
affinity-clusterip-transition 80' May 15 23:58:53.254: INFO: stderr: "I0515 23:58:53.169382 1461 log.go:172] (0xc00003a6e0) (0xc000630f00) Create stream\nI0515 23:58:53.169414 1461 log.go:172] (0xc00003a6e0) (0xc000630f00) Stream added, broadcasting: 1\nI0515 23:58:53.171181 1461 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0515 23:58:53.171227 1461 log.go:172] (0xc00003a6e0) (0xc0006314a0) Create stream\nI0515 23:58:53.171243 1461 log.go:172] (0xc00003a6e0) (0xc0006314a0) Stream added, broadcasting: 3\nI0515 23:58:53.172014 1461 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0515 23:58:53.172031 1461 log.go:172] (0xc00003a6e0) (0xc000631b80) Create stream\nI0515 23:58:53.172038 1461 log.go:172] (0xc00003a6e0) (0xc000631b80) Stream added, broadcasting: 5\nI0515 23:58:53.172640 1461 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0515 23:58:53.246000 1461 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0515 23:58:53.246024 1461 log.go:172] (0xc000631b80) (5) Data frame handling\nI0515 23:58:53.246042 1461 log.go:172] (0xc000631b80) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0515 23:58:53.246590 1461 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0515 23:58:53.246616 1461 log.go:172] (0xc000631b80) (5) Data frame handling\nI0515 23:58:53.246646 1461 log.go:172] (0xc000631b80) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0515 23:58:53.247089 1461 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0515 23:58:53.247114 1461 log.go:172] (0xc0006314a0) (3) Data frame handling\nI0515 23:58:53.247135 1461 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0515 23:58:53.247156 1461 log.go:172] (0xc000631b80) (5) Data frame handling\nI0515 23:58:53.248843 1461 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0515 23:58:53.248919 1461 log.go:172] (0xc000630f00) (1) Data frame handling\nI0515 23:58:53.248947 1461 log.go:172] (0xc000630f00) (1) Data frame sent\nI0515 23:58:53.248964 1461 log.go:172] (0xc00003a6e0) (0xc000630f00) Stream removed, broadcasting: 1\nI0515 23:58:53.249363 1461 log.go:172] (0xc00003a6e0) (0xc000630f00) Stream removed, broadcasting: 1\nI0515 23:58:53.249382 1461 log.go:172] (0xc00003a6e0) (0xc0006314a0) Stream removed, broadcasting: 3\nI0515 23:58:53.249392 1461 log.go:172] (0xc00003a6e0) (0xc000631b80) Stream removed, broadcasting: 5\n" May 15 23:58:53.254: INFO: stdout: "" May 15 23:58:53.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1208 execpod-affinityjw9v2 -- /bin/sh -x -c nc -zv -t -w 2 10.106.79.69 80' May 15 23:58:53.457: INFO: stderr: "I0515 23:58:53.381961 1491 log.go:172] (0xc000ab7290) (0xc000c2c460) Create stream\nI0515 23:58:53.382010 1491 log.go:172] (0xc000ab7290) (0xc000c2c460) Stream added, broadcasting: 1\nI0515 23:58:53.385620 1491 log.go:172] (0xc000ab7290) Reply frame received for 1\nI0515 23:58:53.385670 1491 log.go:172] (0xc000ab7290) (0xc000550e60) Create stream\nI0515 23:58:53.385686 1491 log.go:172] (0xc000ab7290) (0xc000550e60) Stream added, broadcasting: 3\nI0515 23:58:53.386620 1491 log.go:172] (0xc000ab7290) Reply frame received for 3\nI0515 23:58:53.386665 1491 log.go:172] (0xc000ab7290) (0xc00035d720) Create stream\nI0515 23:58:53.386683 1491 log.go:172] (0xc000ab7290) (0xc00035d720) Stream added, broadcasting: 5\nI0515 23:58:53.387415 1491 log.go:172] (0xc000ab7290) Reply frame received for 5\nI0515 23:58:53.451743 
1491 log.go:172] (0xc000ab7290) Data frame received for 5\nI0515 23:58:53.451776 1491 log.go:172] (0xc00035d720) (5) Data frame handling\nI0515 23:58:53.451786 1491 log.go:172] (0xc00035d720) (5) Data frame sent\n+ nc -zv -t -w 2 10.106.79.69 80\nConnection to 10.106.79.69 80 port [tcp/http] succeeded!\nI0515 23:58:53.451819 1491 log.go:172] (0xc000ab7290) Data frame received for 3\nI0515 23:58:53.451855 1491 log.go:172] (0xc000550e60) (3) Data frame handling\nI0515 23:58:53.451977 1491 log.go:172] (0xc000ab7290) Data frame received for 5\nI0515 23:58:53.452021 1491 log.go:172] (0xc00035d720) (5) Data frame handling\nI0515 23:58:53.453526 1491 log.go:172] (0xc000ab7290) Data frame received for 1\nI0515 23:58:53.453543 1491 log.go:172] (0xc000c2c460) (1) Data frame handling\nI0515 23:58:53.453555 1491 log.go:172] (0xc000c2c460) (1) Data frame sent\nI0515 23:58:53.453563 1491 log.go:172] (0xc000ab7290) (0xc000c2c460) Stream removed, broadcasting: 1\nI0515 23:58:53.453658 1491 log.go:172] (0xc000ab7290) Go away received\nI0515 23:58:53.453776 1491 log.go:172] (0xc000ab7290) (0xc000c2c460) Stream removed, broadcasting: 1\nI0515 23:58:53.453786 1491 log.go:172] (0xc000ab7290) (0xc000550e60) Stream removed, broadcasting: 3\nI0515 23:58:53.453791 1491 log.go:172] (0xc000ab7290) (0xc00035d720) Stream removed, broadcasting: 5\n" May 15 23:58:53.457: INFO: stdout: "" May 15 23:58:53.464: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1208 execpod-affinityjw9v2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.79.69:80/ ; done' May 15 23:58:53.771: INFO: stderr: "I0515 23:58:53.611346 1512 log.go:172] (0xc0009d00b0) (0xc0008b4820) Create stream\nI0515 23:58:53.611420 1512 log.go:172] (0xc0009d00b0) (0xc0008b4820) Stream added, broadcasting: 1\nI0515 23:58:53.619310 1512 log.go:172] (0xc0009d00b0) Reply frame received for 1\nI0515 23:58:53.619358 1512 log.go:172] (0xc0009d00b0) (0xc0008b4d20) Create stream\nI0515 23:58:53.619370 1512 log.go:172] (0xc0009d00b0) (0xc0008b4d20) Stream added, broadcasting: 3\nI0515 23:58:53.620647 1512 log.go:172] (0xc0009d00b0) Reply frame received for 3\nI0515 23:58:53.620682 1512 log.go:172] (0xc0009d00b0) (0xc000696000) Create stream\nI0515 23:58:53.620700 1512 log.go:172] (0xc0009d00b0) (0xc000696000) Stream added, broadcasting: 5\nI0515 23:58:53.621489 1512 log.go:172] (0xc0009d00b0) Reply frame received for 5\nI0515 23:58:53.687593 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.687626 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.687638 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.687659 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.687669 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.687683 1512 log.go:172] (0xc000696000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:53.692377 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.692405 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.692427 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.692858 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.692878 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.692905 1512 log.go:172] (0xc000696000) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:53.692956 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.692976 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.692995 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.697608 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.697637 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.697657 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.698033 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.698055 1512 log.go:172] (0xc000696000) (5) Data frame handling\n+ echo\nI0515 23:58:53.698070 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.698089 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.698109 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.698125 1512 log.go:172] (0xc000696000) (5) Data frame sent\nI0515 23:58:53.698146 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.698168 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.698185 1512 log.go:172] (0xc000696000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:53.702113 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.702143 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.702167 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.702500 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.702526 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.702537 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.702552 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.702561 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.702570 1512 log.go:172] (0xc000696000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:53.706973 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.707007 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.707046 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.707314 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.707330 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.707339 1512 log.go:172] (0xc000696000) (5) Data frame sent\n+ echo\nI0515 23:58:53.707384 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.707404 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.707428 1512 log.go:172] (0xc000696000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:53.707575 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.707645 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.707681 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.710783 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.710809 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.710825 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.711396 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.711424 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.711460 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 
23:58:53.711475 1512 log.go:172] (0xc000696000) (5) Data frame sent\nI0515 23:58:53.711484 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.711496 1512 log.go:172] (0xc000696000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:53.711515 1512 log.go:172] (0xc000696000) (5) Data frame sent\nI0515 23:58:53.711535 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.711550 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.715369 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.715386 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.715394 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.716047 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.716060 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.716076 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.716098 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.716115 1512 log.go:172] (0xc000696000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:53.716129 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.720712 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.720721 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.720727 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.721369 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.721401 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.721412 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.721429 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.721438 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.721447 1512 log.go:172] (0xc000696000) (5) Data frame sent\nI0515 23:58:53.721457 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\n+ echo\n+ curl -q -sI0515 23:58:53.721465 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.721515 1512 log.go:172] (0xc000696000) (5) Data frame sent\n --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:53.726006 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.726025 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.726047 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.726557 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.726606 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.726644 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.726668 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.726705 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.726748 1512 log.go:172] (0xc000696000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:53.731320 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.731346 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.731370 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.732152 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.732171 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.732186 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 
23:58:53.732199 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.732207 1512 log.go:172] (0xc000696000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:53.732232 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.736677 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.736701 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.736719 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.737339 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.737375 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.737390 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.737414 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.737428 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.737442 1512 log.go:172] (0xc000696000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:53.740148 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.740161 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.740175 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.740808 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.740832 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.740843 1512 log.go:172] (0xc000696000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:53.740856 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.740864 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.740871 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.745654 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.745667 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.745675 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.745898 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.745914 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.745927 1512 log.go:172] (0xc000696000) (5) Data frame sent\nI0515 23:58:53.745936 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.745945 1512 log.go:172] (0xc000696000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:53.745960 1512 log.go:172] (0xc000696000) (5) Data frame sent\nI0515 23:58:53.746041 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.746064 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.746095 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.749969 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.750002 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.750045 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.750375 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.750389 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.750402 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.750425 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.750436 1512 log.go:172] (0xc000696000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:53.750461 
1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.754472 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.754492 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.754507 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.754975 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.754993 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.755011 1512 log.go:172] (0xc000696000) (5) Data frame sent\n+ echo\n+ curl -q -sI0515 23:58:53.755022 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.755052 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.755063 1512 log.go:172] (0xc000696000) (5) Data frame sent\n --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:53.755093 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.755110 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.755120 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.759224 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.759239 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.759249 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.759592 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.759605 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.759618 1512 log.go:172] (0xc000696000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:53.759696 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.759716 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.759735 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.763771 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.763802 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.763834 1512 log.go:172] (0xc0008b4d20) (3) Data frame sent\nI0515 23:58:53.764341 1512 log.go:172] (0xc0009d00b0) Data frame received for 5\nI0515 23:58:53.764381 1512 log.go:172] (0xc000696000) (5) Data frame handling\nI0515 23:58:53.764421 1512 log.go:172] (0xc0009d00b0) Data frame received for 3\nI0515 23:58:53.764444 1512 log.go:172] (0xc0008b4d20) (3) Data frame handling\nI0515 23:58:53.766082 1512 log.go:172] (0xc0009d00b0) Data frame received for 1\nI0515 23:58:53.766104 1512 log.go:172] (0xc0008b4820) (1) Data frame handling\nI0515 23:58:53.766115 1512 log.go:172] (0xc0008b4820) (1) Data frame sent\nI0515 23:58:53.766128 1512 log.go:172] (0xc0009d00b0) (0xc0008b4820) Stream removed, broadcasting: 1\nI0515 23:58:53.766140 1512 log.go:172] (0xc0009d00b0) Go away received\nI0515 23:58:53.766546 1512 log.go:172] (0xc0009d00b0) (0xc0008b4820) Stream removed, broadcasting: 1\nI0515 23:58:53.766578 1512 log.go:172] (0xc0009d00b0) (0xc0008b4d20) Stream removed, broadcasting: 3\nI0515 23:58:53.766600 1512 log.go:172] (0xc0009d00b0) (0xc000696000) Stream removed, broadcasting: 5\n" May 15 23:58:53.772: INFO: stdout: 
"\naffinity-clusterip-transition-x9k9p\naffinity-clusterip-transition-x9k9p\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-x9k9p\naffinity-clusterip-transition-cqsjc\naffinity-clusterip-transition-x9k9p\naffinity-clusterip-transition-x9k9p\naffinity-clusterip-transition-cqsjc\naffinity-clusterip-transition-cqsjc\naffinity-clusterip-transition-cqsjc\naffinity-clusterip-transition-cqsjc\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-cqsjc\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-cqsjc" May 15 23:58:53.772: INFO: Received response from host: May 15 23:58:53.772: INFO: Received response from host: affinity-clusterip-transition-x9k9p May 15 23:58:53.772: INFO: Received response from host: affinity-clusterip-transition-x9k9p May 15 23:58:53.772: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:53.772: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:53.772: INFO: Received response from host: affinity-clusterip-transition-x9k9p May 15 23:58:53.772: INFO: Received response from host: affinity-clusterip-transition-cqsjc May 15 23:58:53.772: INFO: Received response from host: affinity-clusterip-transition-x9k9p May 15 23:58:53.772: INFO: Received response from host: affinity-clusterip-transition-x9k9p May 15 23:58:53.772: INFO: Received response from host: affinity-clusterip-transition-cqsjc May 15 23:58:53.772: INFO: Received response from host: affinity-clusterip-transition-cqsjc May 15 23:58:53.772: INFO: Received response from host: affinity-clusterip-transition-cqsjc May 15 23:58:53.772: INFO: Received response from host: affinity-clusterip-transition-cqsjc May 15 23:58:53.772: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:53.772: INFO: Received response from host: affinity-clusterip-transition-cqsjc May 15 23:58:53.772: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:53.772: INFO: Received response from host: affinity-clusterip-transition-cqsjc May 15 23:58:53.779: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1208 execpod-affinityjw9v2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.79.69:80/ ; done' May 15 23:58:54.106: INFO: stderr: "I0515 23:58:53.971519 1531 log.go:172] (0xc00056efd0) (0xc000a4c820) Create stream\nI0515 23:58:53.971584 1531 log.go:172] (0xc00056efd0) (0xc000a4c820) Stream added, broadcasting: 1\nI0515 23:58:53.975149 1531 log.go:172] (0xc00056efd0) Reply frame received for 1\nI0515 23:58:53.975177 1531 log.go:172] (0xc00056efd0) (0xc00051e140) Create stream\nI0515 23:58:53.975184 1531 log.go:172] (0xc00056efd0) (0xc00051e140) Stream added, broadcasting: 3\nI0515 23:58:53.976122 1531 log.go:172] (0xc00056efd0) Reply frame received for 3\nI0515 23:58:53.976148 1531 log.go:172] (0xc00056efd0) (0xc0004bac80) Create stream\nI0515 23:58:53.976155 1531 log.go:172] (0xc00056efd0) (0xc0004bac80) Stream added, broadcasting: 5\nI0515 23:58:53.977089 1531 log.go:172] (0xc00056efd0) Reply frame received for 5\nI0515 23:58:54.028969 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.029001 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.029013 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.029036 1531 log.go:172] (0xc00056efd0) Data frame 
received for 5\nI0515 23:58:54.029047 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.029058 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:54.033827 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.033849 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.033901 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.034279 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.034288 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.034293 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.034319 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.034355 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.034390 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:54.038651 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.038691 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.038729 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.039026 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.039042 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\n+ echo\nI0515 23:58:54.039061 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.039131 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.039159 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.039179 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\nI0515 23:58:54.039207 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.039230 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.039254 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:54.044101 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.044119 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.044128 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.044417 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.044444 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.044458 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.044493 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.044523 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.044547 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:54.048552 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.048561 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.048567 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.048864 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.048874 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.048880 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:54.049022 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.049036 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.049045 1531 log.go:172] (0xc00051e140) (3) Data 
frame sent\nI0515 23:58:54.052579 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.052612 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.052642 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.053037 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.053065 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.053086 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:54.053106 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.053261 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.053274 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.056984 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.057011 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.057028 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.057451 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.057466 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.057477 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:54.057575 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.057589 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.057603 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.061092 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.061233 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.061272 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.061675 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.061696 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.061718 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.061732 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.061747 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.061758 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\nI0515 23:58:54.061770 1531 log.go:172] (0xc00056efd0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I0515 23:58:54.061780 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.061789 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\n http://10.106.79.69:80/\nI0515 23:58:54.064756 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.064770 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.064784 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.065769 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.065778 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.065782 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:54.065887 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.065909 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.065927 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.068776 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.068788 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.068796 1531 log.go:172] (0xc00051e140) (3) Data 
frame sent\nI0515 23:58:54.069245 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.069292 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.069308 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.069323 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.069333 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.069343 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:54.072241 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.072257 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.072265 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.072459 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.072474 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.072480 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:54.072494 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.072519 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.072529 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.076665 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.076691 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.076710 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.077015 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.077033 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.077043 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:54.077054 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.077072 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.077090 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.079903 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.079913 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.079917 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.080307 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.080322 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.080334 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:54.080389 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.080406 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.080422 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.084413 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.084421 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.084433 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.085057 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.085077 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.085091 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\nI0515 23:58:54.085105 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.085222 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.106.79.69:80/\nI0515 23:58:54.085237 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.085251 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.085268 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.085287 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\nI0515 23:58:54.089381 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.089402 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.089420 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.089779 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.089810 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.089854 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.089872 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.089896 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.089947 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.79.69:80/\nI0515 23:58:54.093829 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.093852 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.093871 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.094664 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.094697 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.094722 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0515 23:58:54.094750 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.094783 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.094796 1531 log.go:172] (0xc0004bac80) (5) Data frame sent\n http://10.106.79.69:80/\nI0515 23:58:54.094808 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.094820 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.094854 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.098477 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.098507 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.098532 1531 log.go:172] (0xc00051e140) (3) Data frame sent\nI0515 23:58:54.099050 1531 log.go:172] (0xc00056efd0) Data frame received for 3\nI0515 23:58:54.099082 1531 log.go:172] (0xc00051e140) (3) Data frame handling\nI0515 23:58:54.099118 1531 log.go:172] (0xc00056efd0) Data frame received for 5\nI0515 23:58:54.099145 1531 log.go:172] (0xc0004bac80) (5) Data frame handling\nI0515 23:58:54.100535 1531 log.go:172] (0xc00056efd0) Data frame received for 1\nI0515 23:58:54.100548 1531 log.go:172] (0xc000a4c820) (1) Data frame handling\nI0515 23:58:54.100554 1531 log.go:172] (0xc000a4c820) (1) Data frame sent\nI0515 23:58:54.100563 1531 log.go:172] (0xc00056efd0) (0xc000a4c820) Stream removed, broadcasting: 1\nI0515 23:58:54.100588 1531 log.go:172] (0xc00056efd0) Go away received\nI0515 23:58:54.100772 1531 log.go:172] (0xc00056efd0) (0xc000a4c820) Stream removed, broadcasting: 1\nI0515 23:58:54.100783 1531 log.go:172] (0xc00056efd0) (0xc00051e140) Stream removed, broadcasting: 3\nI0515 23:58:54.100787 1531 log.go:172] (0xc00056efd0) (0xc0004bac80) Stream removed, broadcasting: 5\n" May 15 23:58:54.106: INFO: stdout: 
"\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-7gmq9\naffinity-clusterip-transition-7gmq9" May 15 23:58:54.106: INFO: Received response from host: May 15 23:58:54.106: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:54.106: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:54.106: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:54.106: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:54.106: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:54.106: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:54.106: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:54.106: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:54.106: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:54.106: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:54.106: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:54.106: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:54.106: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:54.106: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:54.106: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:54.106: INFO: Received response from host: affinity-clusterip-transition-7gmq9 May 15 23:58:54.106: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-1208, will wait for the garbage collector to delete the pods May 15 23:58:54.200: INFO: Deleting ReplicationController affinity-clusterip-transition took: 3.821441ms May 15 23:58:54.700: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 500.202148ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:59:05.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1208" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:26.597 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":67,"skipped":1172,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:59:05.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:59:16.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9721" for this suite. • [SLOW TEST:11.181 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":288,"completed":68,"skipped":1188,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:59:16.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-66.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-66.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-66.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-66.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-66.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-66.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-66.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-66.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-66.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-66.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-66.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 174.237.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.237.174_udp@PTR;check="$$(dig +tcp +noall +answer +search 174.237.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.237.174_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-66.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-66.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-66.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-66.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-66.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-66.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-66.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-66.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-66.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-66.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-66.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 174.237.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.237.174_udp@PTR;check="$$(dig +tcp +noall +answer +search 174.237.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.237.174_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 23:59:24.882: INFO: Unable to read wheezy_udp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:24.885: INFO: Unable to read wheezy_tcp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:24.888: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:24.890: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:24.915: INFO: Unable to read jessie_udp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:24.918: INFO: Unable to read jessie_tcp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:24.920: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:24.923: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:24.937: INFO: Lookups using dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e failed for: [wheezy_udp@dns-test-service.dns-66.svc.cluster.local wheezy_tcp@dns-test-service.dns-66.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local jessie_udp@dns-test-service.dns-66.svc.cluster.local jessie_tcp@dns-test-service.dns-66.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local] May 15 23:59:29.943: INFO: Unable to read wheezy_udp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:29.947: INFO: Unable to read wheezy_tcp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:29.950: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:29.953: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:29.975: INFO: Unable to read jessie_udp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:29.979: INFO: Unable to read jessie_tcp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:29.982: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:29.985: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:30.006: INFO: Lookups using dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e failed for: [wheezy_udp@dns-test-service.dns-66.svc.cluster.local wheezy_tcp@dns-test-service.dns-66.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local jessie_udp@dns-test-service.dns-66.svc.cluster.local jessie_tcp@dns-test-service.dns-66.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local] May 15 23:59:34.942: INFO: Unable to read wheezy_udp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:34.945: INFO: Unable to read wheezy_tcp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:34.949: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:34.952: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:34.972: INFO: Unable to read jessie_udp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:34.975: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:34.978: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:34.981: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:35.001: INFO: Lookups using dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e failed for: [wheezy_udp@dns-test-service.dns-66.svc.cluster.local wheezy_tcp@dns-test-service.dns-66.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local jessie_udp@dns-test-service.dns-66.svc.cluster.local jessie_tcp@dns-test-service.dns-66.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local] May 15 23:59:39.943: INFO: Unable to read wheezy_udp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:39.947: INFO: Unable to read wheezy_tcp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:39.950: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:39.953: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:39.972: INFO: Unable to read jessie_udp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:39.975: INFO: Unable to read jessie_tcp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:39.978: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:39.981: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:39.998: INFO: Lookups using 
dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e failed for: [wheezy_udp@dns-test-service.dns-66.svc.cluster.local wheezy_tcp@dns-test-service.dns-66.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local jessie_udp@dns-test-service.dns-66.svc.cluster.local jessie_tcp@dns-test-service.dns-66.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local] May 15 23:59:44.942: INFO: Unable to read wheezy_udp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:44.945: INFO: Unable to read wheezy_tcp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:44.949: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:44.951: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:44.969: INFO: Unable to read jessie_udp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:44.971: INFO: Unable to read jessie_tcp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:44.973: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:44.975: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:44.988: INFO: Lookups using dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e failed for: [wheezy_udp@dns-test-service.dns-66.svc.cluster.local wheezy_tcp@dns-test-service.dns-66.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local jessie_udp@dns-test-service.dns-66.svc.cluster.local jessie_tcp@dns-test-service.dns-66.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local] May 15 23:59:49.944: INFO: Unable to read wheezy_udp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:49.947: 
INFO: Unable to read wheezy_tcp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:49.949: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:49.952: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:49.972: INFO: Unable to read jessie_udp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:49.975: INFO: Unable to read jessie_tcp@dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:49.978: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:49.981: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local from pod dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e: the server could not find the requested resource (get pods dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e) May 15 23:59:50.001: INFO: Lookups using dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e failed for: [wheezy_udp@dns-test-service.dns-66.svc.cluster.local wheezy_tcp@dns-test-service.dns-66.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local jessie_udp@dns-test-service.dns-66.svc.cluster.local jessie_tcp@dns-test-service.dns-66.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-66.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-66.svc.cluster.local] May 15 23:59:55.000: INFO: DNS probes using dns-66/dns-test-c73efbf4-4059-4b8e-b576-9362dd31726e succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 23:59:56.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-66" for this suite. 
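The "Unable to read ..." retries above are the expected polling phase: the probe pods only write their /results files once each dig lookup returns an answer, so the framework re-reads them every five seconds until all names resolve (here, succeeding at 23:59:55). The probe loops reduce to a handful of dig queries that can be reproduced from any pod using the cluster's resolver (names, service IP, and flags copied verbatim from the commands above):

# A and SRV lookups over UDP (+notcp) and TCP (+tcp)
dig +notcp +noall +answer +search dns-test-service.dns-66.svc.cluster.local A
dig +tcp +noall +answer +search dns-test-service.dns-66.svc.cluster.local A
dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-66.svc.cluster.local SRV
# PTR lookup of the service ClusterIP 10.104.237.174 (octets reversed)
dig +notcp +noall +answer +search 174.237.104.10.in-addr.arpa. PTR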
• [SLOW TEST:39.479 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":69,"skipped":1189,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 23:59:56.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 15 23:59:56.093: INFO: Waiting up to 5m0s for pod "pod-89329a0f-a849-4585-8cbe-384983e6fc5a" in namespace "emptydir-6370" to be "Succeeded or Failed" May 15 23:59:56.138: INFO: Pod "pod-89329a0f-a849-4585-8cbe-384983e6fc5a": Phase="Pending", Reason="", readiness=false. Elapsed: 44.928648ms May 15 23:59:58.204: INFO: Pod "pod-89329a0f-a849-4585-8cbe-384983e6fc5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110826901s May 16 00:00:00.228: INFO: Pod "pod-89329a0f-a849-4585-8cbe-384983e6fc5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134576449s May 16 00:00:02.232: INFO: Pod "pod-89329a0f-a849-4585-8cbe-384983e6fc5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.13868175s STEP: Saw pod success May 16 00:00:02.232: INFO: Pod "pod-89329a0f-a849-4585-8cbe-384983e6fc5a" satisfied condition "Succeeded or Failed" May 16 00:00:02.234: INFO: Trying to get logs from node latest-worker pod pod-89329a0f-a849-4585-8cbe-384983e6fc5a container test-container: STEP: delete the pod May 16 00:00:02.275: INFO: Waiting for pod pod-89329a0f-a849-4585-8cbe-384983e6fc5a to disappear May 16 00:00:02.296: INFO: Pod pod-89329a0f-a849-4585-8cbe-384983e6fc5a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:00:02.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6370" for this suite. 
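The "(root,0666,default)" case boils down to writing a file with mode 0666 into an emptyDir backed by the default (node-disk) medium and asserting on its permissions. A minimal hand-written equivalent, assuming a busybox stand-in for the framework's mounttest image (all names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo          # illustrative, not the test's generated name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # stand-in for the e2e agnhost/mounttest image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                    # "default" medium, i.e. node storage rather than memory
EOF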
• [SLOW TEST:6.253 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":70,"skipped":1203,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:00:02.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-54bcef0f-3881-468e-a03d-c9d64b504802 STEP: Creating a pod to test consume configMaps May 16 00:00:02.432: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-72ad0a23-c49c-4045-8772-cd05878f7112" in namespace "projected-2901" to be "Succeeded or Failed" May 16 00:00:02.442: INFO: Pod "pod-projected-configmaps-72ad0a23-c49c-4045-8772-cd05878f7112": Phase="Pending", Reason="", readiness=false. Elapsed: 10.73892ms May 16 00:00:04.446: INFO: Pod "pod-projected-configmaps-72ad0a23-c49c-4045-8772-cd05878f7112": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014216011s May 16 00:00:06.450: INFO: Pod "pod-projected-configmaps-72ad0a23-c49c-4045-8772-cd05878f7112": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018028072s STEP: Saw pod success May 16 00:00:06.450: INFO: Pod "pod-projected-configmaps-72ad0a23-c49c-4045-8772-cd05878f7112" satisfied condition "Succeeded or Failed" May 16 00:00:06.452: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-72ad0a23-c49c-4045-8772-cd05878f7112 container projected-configmap-volume-test: STEP: delete the pod May 16 00:00:06.635: INFO: Waiting for pod pod-projected-configmaps-72ad0a23-c49c-4045-8772-cd05878f7112 to disappear May 16 00:00:06.749: INFO: Pod pod-projected-configmaps-72ad0a23-c49c-4045-8772-cd05878f7112 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:00:06.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2901" for this suite. 
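The "defaultMode set" variant exercises the projected volume's defaultMode field, which fixes the permission bits of every projected file unless an individual item overrides them. A sketch of such a pod (names, the data-1 key, and the 0400 mode are illustrative; the test asserts whatever mode it set when building the pod):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-demo  # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/projected/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400             # applies to every file projected below
      sources:
      - configMap:
          name: my-configmap        # stand-in for the generated test ConfigMap
EOF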
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":71,"skipped":1208,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:00:06.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-8edb714e-6715-4b70-a754-319bd74ccded STEP: Creating a pod to test consume configMaps May 16 00:00:06.878: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cce28b02-b3df-48b7-be29-916e94621300" in namespace "projected-4666" to be "Succeeded or Failed" May 16 00:00:06.928: INFO: Pod "pod-projected-configmaps-cce28b02-b3df-48b7-be29-916e94621300": Phase="Pending", Reason="", readiness=false. Elapsed: 50.150953ms May 16 00:00:09.030: INFO: Pod "pod-projected-configmaps-cce28b02-b3df-48b7-be29-916e94621300": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152786191s May 16 00:00:11.034: INFO: Pod "pod-projected-configmaps-cce28b02-b3df-48b7-be29-916e94621300": Phase="Running", Reason="", readiness=true. Elapsed: 4.156557613s May 16 00:00:13.151: INFO: Pod "pod-projected-configmaps-cce28b02-b3df-48b7-be29-916e94621300": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.273297869s STEP: Saw pod success May 16 00:00:13.151: INFO: Pod "pod-projected-configmaps-cce28b02-b3df-48b7-be29-916e94621300" satisfied condition "Succeeded or Failed" May 16 00:00:13.154: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-cce28b02-b3df-48b7-be29-916e94621300 container projected-configmap-volume-test: STEP: delete the pod May 16 00:00:13.315: INFO: Waiting for pod pod-projected-configmaps-cce28b02-b3df-48b7-be29-916e94621300 to disappear May 16 00:00:13.345: INFO: Pod pod-projected-configmaps-cce28b02-b3df-48b7-be29-916e94621300 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:00:13.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4666" for this suite. 
• [SLOW TEST:6.600 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":72,"skipped":1213,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:00:13.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5545 STEP: creating service affinity-nodeport-transition in namespace services-5545 STEP: creating replication controller affinity-nodeport-transition in namespace services-5545 I0516 00:00:13.688742 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-5545, replica count: 3 I0516 00:00:16.739099 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 00:00:19.739372 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 16 00:00:19.756: INFO: Creating new exec pod May 16 00:00:24.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5545 execpod-affinitylq6ws -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 16 00:00:25.028: INFO: stderr: "I0516 00:00:24.904790 1551 log.go:172] (0xc00061c000) (0xc000644d20) Create stream\nI0516 00:00:24.904838 1551 log.go:172] (0xc00061c000) (0xc000644d20) Stream added, broadcasting: 1\nI0516 00:00:24.907271 1551 log.go:172] (0xc00061c000) Reply frame received for 1\nI0516 00:00:24.907294 1551 log.go:172] (0xc00061c000) (0xc0006285a0) Create stream\nI0516 00:00:24.907301 1551 log.go:172] (0xc00061c000) (0xc0006285a0) Stream added, broadcasting: 3\nI0516 00:00:24.908240 1551 log.go:172] (0xc00061c000) Reply frame received for 3\nI0516 00:00:24.908288 1551 log.go:172] (0xc00061c000) (0xc000645cc0) Create stream\nI0516 00:00:24.908307 1551 log.go:172] (0xc00061c000) (0xc000645cc0) Stream added, broadcasting: 5\nI0516 00:00:24.909098 1551 log.go:172] (0xc00061c000) Reply frame received for 5\nI0516 00:00:24.999554 1551 log.go:172] (0xc00061c000) Data frame received for 5\nI0516 00:00:24.999576 1551 log.go:172] (0xc000645cc0) (5) Data frame 
handling\nI0516 00:00:24.999596 1551 log.go:172] (0xc000645cc0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0516 00:00:25.021824 1551 log.go:172] (0xc00061c000) Data frame received for 5\nI0516 00:00:25.021854 1551 log.go:172] (0xc00061c000) Data frame received for 3\nI0516 00:00:25.021904 1551 log.go:172] (0xc0006285a0) (3) Data frame handling\nI0516 00:00:25.021935 1551 log.go:172] (0xc000645cc0) (5) Data frame handling\nI0516 00:00:25.021963 1551 log.go:172] (0xc000645cc0) (5) Data frame sent\nI0516 00:00:25.021975 1551 log.go:172] (0xc00061c000) Data frame received for 5\nI0516 00:00:25.021983 1551 log.go:172] (0xc000645cc0) (5) Data frame handling\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0516 00:00:25.023177 1551 log.go:172] (0xc00061c000) Data frame received for 1\nI0516 00:00:25.023195 1551 log.go:172] (0xc000644d20) (1) Data frame handling\nI0516 00:00:25.023206 1551 log.go:172] (0xc000644d20) (1) Data frame sent\nI0516 00:00:25.023418 1551 log.go:172] (0xc00061c000) (0xc000644d20) Stream removed, broadcasting: 1\nI0516 00:00:25.023471 1551 log.go:172] (0xc00061c000) Go away received\nI0516 00:00:25.023858 1551 log.go:172] (0xc00061c000) (0xc000644d20) Stream removed, broadcasting: 1\nI0516 00:00:25.023893 1551 log.go:172] (0xc00061c000) (0xc0006285a0) Stream removed, broadcasting: 3\nI0516 00:00:25.023916 1551 log.go:172] (0xc00061c000) (0xc000645cc0) Stream removed, broadcasting: 5\n" May 16 00:00:25.028: INFO: stdout: "" May 16 00:00:25.029: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5545 execpod-affinitylq6ws -- /bin/sh -x -c nc -zv -t -w 2 10.109.239.75 80' May 16 00:00:25.202: INFO: stderr: "I0516 00:00:25.135432 1572 log.go:172] (0xc00068a8f0) (0xc000579d60) Create stream\nI0516 00:00:25.135475 1572 log.go:172] (0xc00068a8f0) (0xc000579d60) Stream added, broadcasting: 1\nI0516 00:00:25.137645 1572 log.go:172] (0xc00068a8f0) Reply frame received for 1\nI0516 00:00:25.137683 1572 log.go:172] (0xc00068a8f0) (0xc0005323c0) Create stream\nI0516 00:00:25.137698 1572 log.go:172] (0xc00068a8f0) (0xc0005323c0) Stream added, broadcasting: 3\nI0516 00:00:25.138638 1572 log.go:172] (0xc00068a8f0) Reply frame received for 3\nI0516 00:00:25.138677 1572 log.go:172] (0xc00068a8f0) (0xc0004a4f00) Create stream\nI0516 00:00:25.138696 1572 log.go:172] (0xc00068a8f0) (0xc0004a4f00) Stream added, broadcasting: 5\nI0516 00:00:25.139382 1572 log.go:172] (0xc00068a8f0) Reply frame received for 5\nI0516 00:00:25.195461 1572 log.go:172] (0xc00068a8f0) Data frame received for 5\nI0516 00:00:25.195499 1572 log.go:172] (0xc0004a4f00) (5) Data frame handling\nI0516 00:00:25.195522 1572 log.go:172] (0xc0004a4f00) (5) Data frame sent\nI0516 00:00:25.195538 1572 log.go:172] (0xc00068a8f0) Data frame received for 5\nI0516 00:00:25.195553 1572 log.go:172] (0xc0004a4f00) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.239.75 80\nConnection to 10.109.239.75 80 port [tcp/http] succeeded!\nI0516 00:00:25.195604 1572 log.go:172] (0xc00068a8f0) Data frame received for 3\nI0516 00:00:25.195651 1572 log.go:172] (0xc0005323c0) (3) Data frame handling\nI0516 00:00:25.197092 1572 log.go:172] (0xc00068a8f0) Data frame received for 1\nI0516 00:00:25.197259 1572 log.go:172] (0xc000579d60) (1) Data frame handling\nI0516 00:00:25.197296 1572 log.go:172] (0xc000579d60) (1) Data frame sent\nI0516 00:00:25.197334 1572 log.go:172] (0xc00068a8f0) (0xc000579d60) Stream removed, 
broadcasting: 1\nI0516 00:00:25.197371 1572 log.go:172] (0xc00068a8f0) Go away received\nI0516 00:00:25.197859 1572 log.go:172] (0xc00068a8f0) (0xc000579d60) Stream removed, broadcasting: 1\nI0516 00:00:25.197883 1572 log.go:172] (0xc00068a8f0) (0xc0005323c0) Stream removed, broadcasting: 3\nI0516 00:00:25.197898 1572 log.go:172] (0xc00068a8f0) (0xc0004a4f00) Stream removed, broadcasting: 5\n" May 16 00:00:25.202: INFO: stdout: "" May 16 00:00:25.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5545 execpod-affinitylq6ws -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31872' May 16 00:00:25.385: INFO: stderr: "I0516 00:00:25.308307 1592 log.go:172] (0xc000ae53f0) (0xc000bc2140) Create stream\nI0516 00:00:25.308346 1592 log.go:172] (0xc000ae53f0) (0xc000bc2140) Stream added, broadcasting: 1\nI0516 00:00:25.312570 1592 log.go:172] (0xc000ae53f0) Reply frame received for 1\nI0516 00:00:25.312600 1592 log.go:172] (0xc000ae53f0) (0xc0006f1e00) Create stream\nI0516 00:00:25.312609 1592 log.go:172] (0xc000ae53f0) (0xc0006f1e00) Stream added, broadcasting: 3\nI0516 00:00:25.313737 1592 log.go:172] (0xc000ae53f0) Reply frame received for 3\nI0516 00:00:25.313786 1592 log.go:172] (0xc000ae53f0) (0xc0004f2140) Create stream\nI0516 00:00:25.313805 1592 log.go:172] (0xc000ae53f0) (0xc0004f2140) Stream added, broadcasting: 5\nI0516 00:00:25.314683 1592 log.go:172] (0xc000ae53f0) Reply frame received for 5\nI0516 00:00:25.378701 1592 log.go:172] (0xc000ae53f0) Data frame received for 5\nI0516 00:00:25.378727 1592 log.go:172] (0xc0004f2140) (5) Data frame handling\nI0516 00:00:25.378741 1592 log.go:172] (0xc0004f2140) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31872\nConnection to 172.17.0.13 31872 port [tcp/31872] succeeded!\nI0516 00:00:25.379124 1592 log.go:172] (0xc000ae53f0) Data frame received for 5\nI0516 00:00:25.379150 1592 log.go:172] (0xc0004f2140) (5) Data frame handling\nI0516 00:00:25.379427 1592 log.go:172] (0xc000ae53f0) Data frame received for 3\nI0516 00:00:25.379439 1592 log.go:172] (0xc0006f1e00) (3) Data frame handling\nI0516 00:00:25.380656 1592 log.go:172] (0xc000ae53f0) Data frame received for 1\nI0516 00:00:25.380673 1592 log.go:172] (0xc000bc2140) (1) Data frame handling\nI0516 00:00:25.380705 1592 log.go:172] (0xc000bc2140) (1) Data frame sent\nI0516 00:00:25.380723 1592 log.go:172] (0xc000ae53f0) (0xc000bc2140) Stream removed, broadcasting: 1\nI0516 00:00:25.380798 1592 log.go:172] (0xc000ae53f0) Go away received\nI0516 00:00:25.381012 1592 log.go:172] (0xc000ae53f0) (0xc000bc2140) Stream removed, broadcasting: 1\nI0516 00:00:25.381027 1592 log.go:172] (0xc000ae53f0) (0xc0006f1e00) Stream removed, broadcasting: 3\nI0516 00:00:25.381036 1592 log.go:172] (0xc000ae53f0) (0xc0004f2140) Stream removed, broadcasting: 5\n" May 16 00:00:25.385: INFO: stdout: "" May 16 00:00:25.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5545 execpod-affinitylq6ws -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31872' May 16 00:00:25.593: INFO: stderr: "I0516 00:00:25.530308 1615 log.go:172] (0xc000a4e0b0) (0xc000686e60) Create stream\nI0516 00:00:25.530373 1615 log.go:172] (0xc000a4e0b0) (0xc000686e60) Stream added, broadcasting: 1\nI0516 00:00:25.532786 1615 log.go:172] (0xc000a4e0b0) Reply frame received for 1\nI0516 00:00:25.532830 1615 log.go:172] (0xc000a4e0b0) (0xc00060c5a0) Create stream\nI0516 00:00:25.532844 1615 
log.go:172] (0xc000a4e0b0) (0xc00060c5a0) Stream added, broadcasting: 3\nI0516 00:00:25.533646 1615 log.go:172] (0xc000a4e0b0) Reply frame received for 3\nI0516 00:00:25.533685 1615 log.go:172] (0xc000a4e0b0) (0xc0005e0280) Create stream\nI0516 00:00:25.533712 1615 log.go:172] (0xc000a4e0b0) (0xc0005e0280) Stream added, broadcasting: 5\nI0516 00:00:25.534393 1615 log.go:172] (0xc000a4e0b0) Reply frame received for 5\nI0516 00:00:25.586567 1615 log.go:172] (0xc000a4e0b0) Data frame received for 5\nI0516 00:00:25.586601 1615 log.go:172] (0xc0005e0280) (5) Data frame handling\nI0516 00:00:25.586620 1615 log.go:172] (0xc0005e0280) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31872\nI0516 00:00:25.587168 1615 log.go:172] (0xc000a4e0b0) Data frame received for 5\nI0516 00:00:25.587194 1615 log.go:172] (0xc0005e0280) (5) Data frame handling\nI0516 00:00:25.587230 1615 log.go:172] (0xc0005e0280) (5) Data frame sent\nI0516 00:00:25.587255 1615 log.go:172] (0xc000a4e0b0) Data frame received for 3\nI0516 00:00:25.587274 1615 log.go:172] (0xc00060c5a0) (3) Data frame handling\nConnection to 172.17.0.12 31872 port [tcp/31872] succeeded!\nI0516 00:00:25.587441 1615 log.go:172] (0xc000a4e0b0) Data frame received for 5\nI0516 00:00:25.587456 1615 log.go:172] (0xc0005e0280) (5) Data frame handling\nI0516 00:00:25.588875 1615 log.go:172] (0xc000a4e0b0) Data frame received for 1\nI0516 00:00:25.588892 1615 log.go:172] (0xc000686e60) (1) Data frame handling\nI0516 00:00:25.588912 1615 log.go:172] (0xc000686e60) (1) Data frame sent\nI0516 00:00:25.588928 1615 log.go:172] (0xc000a4e0b0) (0xc000686e60) Stream removed, broadcasting: 1\nI0516 00:00:25.589000 1615 log.go:172] (0xc000a4e0b0) Go away received\nI0516 00:00:25.589400 1615 log.go:172] (0xc000a4e0b0) (0xc000686e60) Stream removed, broadcasting: 1\nI0516 00:00:25.589417 1615 log.go:172] (0xc000a4e0b0) (0xc00060c5a0) Stream removed, broadcasting: 3\nI0516 00:00:25.589426 1615 log.go:172] (0xc000a4e0b0) (0xc0005e0280) Stream removed, broadcasting: 5\n" May 16 00:00:25.593: INFO: stdout: "" May 16 00:00:25.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5545 execpod-affinitylq6ws -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31872/ ; done' May 16 00:00:26.083: INFO: stderr: "I0516 00:00:25.846928 1634 log.go:172] (0xc0009b91e0) (0xc000b666e0) Create stream\nI0516 00:00:25.846971 1634 log.go:172] (0xc0009b91e0) (0xc000b666e0) Stream added, broadcasting: 1\nI0516 00:00:25.850785 1634 log.go:172] (0xc0009b91e0) Reply frame received for 1\nI0516 00:00:25.850831 1634 log.go:172] (0xc0009b91e0) (0xc000828e60) Create stream\nI0516 00:00:25.850854 1634 log.go:172] (0xc0009b91e0) (0xc000828e60) Stream added, broadcasting: 3\nI0516 00:00:25.851604 1634 log.go:172] (0xc0009b91e0) Reply frame received for 3\nI0516 00:00:25.851645 1634 log.go:172] (0xc0009b91e0) (0xc0005dc1e0) Create stream\nI0516 00:00:25.851677 1634 log.go:172] (0xc0009b91e0) (0xc0005dc1e0) Stream added, broadcasting: 5\nI0516 00:00:25.852491 1634 log.go:172] (0xc0009b91e0) Reply frame received for 5\nI0516 00:00:25.912064 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:25.912103 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:25.912134 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\n+ seq 0 15\nI0516 00:00:25.943286 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:25.943334 1634 
log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:25.943355 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:25.943425 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:25.943460 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:25.943485 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\nI0516 00:00:25.943501 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:25.943517 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:25.943542 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\nI0516 00:00:25.982478 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:25.982509 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:25.982543 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:25.982704 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:25.982722 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:25.982749 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:25.982767 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:25.982776 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:25.982784 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\n+ echo\nI0516 00:00:25.982797 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:25.982834 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:25.982857 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:25.988577 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:25.988594 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:25.988622 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:25.989059 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:25.989079 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:25.989092 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:25.989389 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:25.989409 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:25.989435 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:25.995972 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:25.995999 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:25.996027 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:25.996642 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:25.996668 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:25.996680 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:25.996698 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:25.996712 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:25.996723 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.002453 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.002478 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.002498 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.003188 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 
00:00:26.003236 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.003254 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.003271 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.003281 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:26.003290 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.008258 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.008283 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.008299 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.009654 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.009684 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:26.009698 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.009715 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.009732 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.009744 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.013712 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.013726 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.013735 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.014471 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.014484 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.014492 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.014526 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.014561 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:26.014603 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.021012 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.021051 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.021086 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.021268 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.021290 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.021301 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.021318 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.021331 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\nI0516 00:00:26.021352 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.024477 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.024506 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.024539 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.025046 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.025071 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:26.025084 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\nI0516 00:00:26.025099 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.025108 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.025416 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 
00:00:26.025427 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\nI0516 00:00:26.025447 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.025454 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.031143 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.031168 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.031190 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.031934 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.031954 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:26.031979 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\n+ echo\n+ curl -qI0516 00:00:26.032254 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.032282 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.032307 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:26.032325 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\n -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.032348 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.032359 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.035713 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.035735 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.035754 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.036454 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.036481 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.036492 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.036504 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.036510 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:26.036518 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.043747 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.043765 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.043787 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.044353 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.044365 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:26.044372 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.044384 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.044396 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.044411 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.049921 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.049951 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.049980 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.050547 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.050559 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.050571 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.050586 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:26.050595 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.050608 1634 log.go:172] (0xc000828e60) (3) Data frame 
sent\nI0516 00:00:26.056559 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.056574 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.056585 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.057005 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.057018 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:26.057030 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.057321 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.057343 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.057359 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.065495 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.065519 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.065542 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.065985 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.066002 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:26.066017 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0516 00:00:26.066030 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.066060 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:26.066074 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\n http://172.17.0.13:31872/\nI0516 00:00:26.066087 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.066095 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.066104 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.070146 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.070167 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.070183 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.070681 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.070694 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:26.070701 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\nI0516 00:00:26.070707 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.070716 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.070728 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.070766 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.070778 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.070788 1634 log.go:172] (0xc0005dc1e0) (5) Data frame sent\nI0516 00:00:26.075363 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.075382 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.075399 1634 log.go:172] (0xc000828e60) (3) Data frame sent\nI0516 00:00:26.076064 1634 log.go:172] (0xc0009b91e0) Data frame received for 3\nI0516 00:00:26.076086 1634 log.go:172] (0xc000828e60) (3) Data frame handling\nI0516 00:00:26.076108 1634 log.go:172] (0xc0009b91e0) Data frame received for 5\nI0516 00:00:26.076122 1634 log.go:172] (0xc0005dc1e0) (5) Data frame handling\nI0516 00:00:26.077793 1634 log.go:172] (0xc0009b91e0) Data frame received for 1\nI0516 00:00:26.077811 1634 log.go:172] (0xc000b666e0) (1) Data frame handling\nI0516 00:00:26.077825 1634 log.go:172] 
(0xc000b666e0) (1) Data frame sent\nI0516 00:00:26.077835 1634 log.go:172] (0xc0009b91e0) (0xc000b666e0) Stream removed, broadcasting: 1\nI0516 00:00:26.077951 1634 log.go:172] (0xc0009b91e0) Go away received\nI0516 00:00:26.078074 1634 log.go:172] (0xc0009b91e0) (0xc000b666e0) Stream removed, broadcasting: 1\nI0516 00:00:26.078087 1634 log.go:172] (0xc0009b91e0) (0xc000828e60) Stream removed, broadcasting: 3\nI0516 00:00:26.078094 1634 log.go:172] (0xc0009b91e0) (0xc0005dc1e0) Stream removed, broadcasting: 5\n" May 16 00:00:26.083: INFO: stdout: "\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-fq5sr\naffinity-nodeport-transition-8x4hz\naffinity-nodeport-transition-8x4hz\naffinity-nodeport-transition-fq5sr\naffinity-nodeport-transition-fq5sr\naffinity-nodeport-transition-fq5sr\naffinity-nodeport-transition-8x4hz\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-8x4hz\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-fq5sr\naffinity-nodeport-transition-lnh48" May 16 00:00:26.083: INFO: Received response from host: May 16 00:00:26.083: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.083: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.083: INFO: Received response from host: affinity-nodeport-transition-fq5sr May 16 00:00:26.083: INFO: Received response from host: affinity-nodeport-transition-8x4hz May 16 00:00:26.083: INFO: Received response from host: affinity-nodeport-transition-8x4hz May 16 00:00:26.083: INFO: Received response from host: affinity-nodeport-transition-fq5sr May 16 00:00:26.083: INFO: Received response from host: affinity-nodeport-transition-fq5sr May 16 00:00:26.083: INFO: Received response from host: affinity-nodeport-transition-fq5sr May 16 00:00:26.083: INFO: Received response from host: affinity-nodeport-transition-8x4hz May 16 00:00:26.083: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.083: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.083: INFO: Received response from host: affinity-nodeport-transition-8x4hz May 16 00:00:26.083: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.083: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.083: INFO: Received response from host: affinity-nodeport-transition-fq5sr May 16 00:00:26.083: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5545 execpod-affinitylq6ws -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31872/ ; done' May 16 00:00:26.365: INFO: stderr: "I0516 00:00:26.232953 1654 log.go:172] (0xc000922fd0) (0xc00070c640) Create stream\nI0516 00:00:26.233005 1654 log.go:172] (0xc000922fd0) (0xc00070c640) Stream added, broadcasting: 1\nI0516 00:00:26.235935 1654 log.go:172] (0xc000922fd0) Reply frame received for 1\nI0516 00:00:26.235984 1654 log.go:172] (0xc000922fd0) (0xc00071cf00) Create stream\nI0516 00:00:26.235997 1654 log.go:172] (0xc000922fd0) (0xc00071cf00) Stream added, broadcasting: 3\nI0516 00:00:26.236927 1654 log.go:172] (0xc000922fd0) Reply frame received for 3\nI0516 00:00:26.236951 1654 log.go:172] 
(0xc000922fd0) (0xc00070cfa0) Create stream\nI0516 00:00:26.236959 1654 log.go:172] (0xc000922fd0) (0xc00070cfa0) Stream added, broadcasting: 5\nI0516 00:00:26.238293 1654 log.go:172] (0xc000922fd0) Reply frame received for 5\nI0516 00:00:26.278740 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.278767 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.278776 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.278799 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.278806 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.278811 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.282447 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.282469 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.282486 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.283277 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.283299 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.283310 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.283321 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.283326 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.283339 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.288884 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.288926 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.288953 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.289081 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.289099 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.289282 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.290298 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.290323 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.290365 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.303013 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.303049 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.303062 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.303079 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.303088 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.303097 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.303104 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.303111 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.303128 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.306668 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.306679 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.306688 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.306976 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.306991 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.306998 1654 log.go:172] (0xc00071cf00) (3) Data 
frame sent\nI0516 00:00:26.307006 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.307011 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.307017 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.310114 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.310125 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.310134 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.310427 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.310450 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.310457 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.310466 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.310474 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.310478 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.314458 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.314474 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.314499 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.314839 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.314853 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.314860 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.314868 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.314874 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.314879 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.317964 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.317981 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.318001 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.318284 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.318305 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.318312 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.318327 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.318347 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.318370 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.321399 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.321455 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.321475 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.321831 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.321850 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.321858 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.321868 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.321876 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.321901 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.325990 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.326009 1654 log.go:172] (0xc00071cf00) (3) Data 
frame handling\nI0516 00:00:26.326035 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.326421 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.326449 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.326464 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.326478 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.326486 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.326493 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.329933 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.329947 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.329959 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.330437 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.330465 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.330487 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.330510 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.330524 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.330540 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.334682 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.334701 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.334711 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.335120 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.335146 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.335165 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.335181 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.335206 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.335238 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0516 00:00:26.335258 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.335276 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.335296 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n 2 http://172.17.0.13:31872/\nI0516 00:00:26.339504 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.339521 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.339538 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.339877 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.339901 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.339920 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.339946 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.339964 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.339974 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.345643 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.345672 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.345769 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.346213 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.346234 1654 log.go:172] (0xc00071cf00) 
(3) Data frame handling\nI0516 00:00:26.346243 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.346278 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.346298 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.346324 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.349467 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.349485 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.349524 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.350141 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.350159 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.350174 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.350232 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.350246 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.350255 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.355370 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.355380 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.355386 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.355825 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.355835 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.355841 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.355896 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.355934 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.355969 1654 log.go:172] (0xc00070cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31872/\nI0516 00:00:26.358884 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.358901 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.358916 1654 log.go:172] (0xc00071cf00) (3) Data frame sent\nI0516 00:00:26.359615 1654 log.go:172] (0xc000922fd0) Data frame received for 3\nI0516 00:00:26.359633 1654 log.go:172] (0xc00071cf00) (3) Data frame handling\nI0516 00:00:26.359852 1654 log.go:172] (0xc000922fd0) Data frame received for 5\nI0516 00:00:26.359867 1654 log.go:172] (0xc00070cfa0) (5) Data frame handling\nI0516 00:00:26.361431 1654 log.go:172] (0xc000922fd0) Data frame received for 1\nI0516 00:00:26.361447 1654 log.go:172] (0xc00070c640) (1) Data frame handling\nI0516 00:00:26.361460 1654 log.go:172] (0xc00070c640) (1) Data frame sent\nI0516 00:00:26.361490 1654 log.go:172] (0xc000922fd0) (0xc00070c640) Stream removed, broadcasting: 1\nI0516 00:00:26.361658 1654 log.go:172] (0xc000922fd0) Go away received\nI0516 00:00:26.361754 1654 log.go:172] (0xc000922fd0) (0xc00070c640) Stream removed, broadcasting: 1\nI0516 00:00:26.361768 1654 log.go:172] (0xc000922fd0) (0xc00071cf00) Stream removed, broadcasting: 3\nI0516 00:00:26.361773 1654 log.go:172] (0xc000922fd0) (0xc00070cfa0) Stream removed, broadcasting: 5\n" May 16 00:00:26.366: INFO: stdout: 
"\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48\naffinity-nodeport-transition-lnh48" May 16 00:00:26.366: INFO: Received response from host: May 16 00:00:26.366: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.366: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.366: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.366: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.366: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.366: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.366: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.366: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.366: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.366: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.366: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.366: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.366: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.366: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.366: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.366: INFO: Received response from host: affinity-nodeport-transition-lnh48 May 16 00:00:26.366: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-5545, will wait for the garbage collector to delete the pods May 16 00:00:26.890: INFO: Deleting ReplicationController affinity-nodeport-transition took: 372.888433ms May 16 00:00:27.390: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.235461ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:00:35.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5545" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:21.975 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":73,"skipped":1225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:00:35.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 16 00:00:35.451: INFO: Waiting up to 5m0s for pod "client-containers-53dfab9b-2079-4ccd-af4a-e56d076e3143" in namespace "containers-6136" to be "Succeeded or Failed" May 16 00:00:35.456: INFO: Pod "client-containers-53dfab9b-2079-4ccd-af4a-e56d076e3143": Phase="Pending", Reason="", readiness=false. Elapsed: 4.493606ms May 16 00:00:37.563: INFO: Pod "client-containers-53dfab9b-2079-4ccd-af4a-e56d076e3143": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112158408s May 16 00:00:39.567: INFO: Pod "client-containers-53dfab9b-2079-4ccd-af4a-e56d076e3143": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.115802898s STEP: Saw pod success May 16 00:00:39.567: INFO: Pod "client-containers-53dfab9b-2079-4ccd-af4a-e56d076e3143" satisfied condition "Succeeded or Failed" May 16 00:00:39.570: INFO: Trying to get logs from node latest-worker pod client-containers-53dfab9b-2079-4ccd-af4a-e56d076e3143 container test-container: STEP: delete the pod May 16 00:00:39.648: INFO: Waiting for pod client-containers-53dfab9b-2079-4ccd-af4a-e56d076e3143 to disappear May 16 00:00:39.652: INFO: Pod client-containers-53dfab9b-2079-4ccd-af4a-e56d076e3143 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:00:39.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6136" for this suite. 
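The test that just finished exercises two pod-spec fields: command replaces the image's ENTRYPOINT and args replaces its CMD. A minimal client-go sketch of the same override follows; the image, namespace, and pod name are placeholders, not values from this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "override-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",                    // placeholder image
				Command: []string{"/bin/echo"},        // overrides the image ENTRYPOINT
				Args:    []string{"override", "all"},  // overrides the image CMD
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name)
}
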
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":74,"skipped":1257,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:00:39.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:00:40.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 16 00:00:40.640: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-16T00:00:40Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-16T00:00:40Z]] name:name1 resourceVersion:5003206 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:4e9da8c6-8b73-497d-9d61-13232763a91d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 16 00:00:50.645: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-16T00:00:50Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-16T00:00:50Z]] name:name2 resourceVersion:5003255 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4fafd633-e74a-4887-9495-9e14123643d2] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 16 00:01:00.652: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-16T00:00:40Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-16T00:01:00Z]] name:name1 resourceVersion:5003294 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:4e9da8c6-8b73-497d-9d61-13232763a91d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 16 00:01:10.659: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-16T00:00:50Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update 
time:2020-05-16T00:01:10Z]] name:name2 resourceVersion:5003333 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4fafd633-e74a-4887-9495-9e14123643d2] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 16 00:01:20.666: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-16T00:00:40Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-16T00:01:00Z]] name:name1 resourceVersion:5003368 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:4e9da8c6-8b73-497d-9d61-13232763a91d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 16 00:01:30.675: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-16T00:00:50Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-16T00:01:10Z]] name:name2 resourceVersion:5003406 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4fafd633-e74a-4887-9495-9e14123643d2] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:01:41.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9812" for this suite. 
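The ADDED/MODIFIED/DELETED sequence above is an ordinary watch on the custom resource, reproducible with client-go's dynamic client. The group/version/resource below is taken from the events themselves (the noxus CRD is cluster-scoped, per its selfLink); the kubeconfig path matches the one the suite uses:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GVR from the watch events above; "noxus" is the CRD's plural resource name.
	gvr := schema.GroupVersionResource{
		Group:    "mygroup.example.com",
		Version:  "v1beta1",
		Resource: "noxus",
	}

	w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each create/update/delete of a CR arrives as ADDED/MODIFIED/DELETED,
	// the same sequence the test logged.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
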
• [SLOW TEST:61.506 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":75,"skipped":1276,"failed":0} [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:01:41.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 16 00:03:42.037: INFO: Successfully updated pod "var-expansion-5d339367-5dc5-4c8e-84cc-86c31549fbdc" STEP: waiting for pod running STEP: deleting the pod gracefully May 16 00:03:44.070: INFO: Deleting pod "var-expansion-5d339367-5dc5-4c8e-84cc-86c31549fbdc" in namespace "var-expansion-2899" May 16 00:03:44.076: INFO: Wait up to 5m0s for pod "var-expansion-5d339367-5dc5-4c8e-84cc-86c31549fbdc" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:04:26.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2899" for this suite. 
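The "failed condition" in the test above comes from subPathExpr: the kubelet expands $(VAR) references in volumeMounts[].subPathExpr at mount time, so a reference to a variable the container does not define keeps the pod from starting until the spec is corrected. A sketch of the field in use; the container name, emptyDir volume, and POD_NAME variable are illustrative, not the test's generated spec:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "dapi-container",
			Image: "busybox", // placeholder
			Env: []corev1.EnvVar{{
				Name: "POD_NAME",
				ValueFrom: &corev1.EnvVarSource{
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				},
			}},
			VolumeMounts: []corev1.VolumeMount{{
				Name:        "workdir",
				MountPath:   "/volume_mount",
				SubPathExpr: "$(POD_NAME)", // expanded per pod; an undefined var here fails the mount
			}},
		}},
		Volumes: []corev1.Volume{{
			Name:         "workdir",
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		}},
	}
	fmt.Printf("%+v\n", spec)
}
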
• [SLOW TEST:164.939 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":76,"skipped":1276,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:04:26.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-c5dda240-43e9-47cf-8a41-af1914808120 STEP: updating the pod May 16 00:04:34.943: INFO: Successfully updated pod "var-expansion-c5dda240-43e9-47cf-8a41-af1914808120" STEP: waiting for pod and container restart STEP: Failing liveness probe May 16 00:04:34.984: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-7877 PodName:var-expansion-c5dda240-43e9-47cf-8a41-af1914808120 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:04:34.984: INFO: >>> kubeConfig: /root/.kube/config I0516 00:04:35.019476 7 log.go:172] (0xc002850420) (0xc002719b80) Create stream I0516 00:04:35.019523 7 log.go:172] (0xc002850420) (0xc002719b80) Stream added, broadcasting: 1 I0516 00:04:35.021558 7 log.go:172] (0xc002850420) Reply frame received for 1 I0516 00:04:35.021589 7 log.go:172] (0xc002850420) (0xc0025bd4a0) Create stream I0516 00:04:35.021602 7 log.go:172] (0xc002850420) (0xc0025bd4a0) Stream added, broadcasting: 3 I0516 00:04:35.022703 7 log.go:172] (0xc002850420) Reply frame received for 3 I0516 00:04:35.022724 7 log.go:172] (0xc002850420) (0xc001f91d60) Create stream I0516 00:04:35.022742 7 log.go:172] (0xc002850420) (0xc001f91d60) Stream added, broadcasting: 5 I0516 00:04:35.023681 7 log.go:172] (0xc002850420) Reply frame received for 5 I0516 00:04:35.110282 7 log.go:172] (0xc002850420) Data frame received for 3 I0516 00:04:35.110327 7 log.go:172] (0xc0025bd4a0) (3) Data frame handling I0516 00:04:35.110356 7 log.go:172] (0xc002850420) Data frame received for 5 I0516 00:04:35.110369 7 log.go:172] (0xc001f91d60) (5) Data frame handling I0516 00:04:35.112177 7 log.go:172] (0xc002850420) Data frame received for 1 I0516 00:04:35.112242 7 log.go:172] (0xc002719b80) (1) Data frame handling I0516 00:04:35.112267 7 log.go:172] (0xc002719b80) (1) Data frame sent I0516 00:04:35.112371 7 log.go:172] (0xc002850420) (0xc002719b80) Stream removed, 
broadcasting: 1 I0516 00:04:35.112404 7 log.go:172] (0xc002850420) Go away received I0516 00:04:35.112904 7 log.go:172] (0xc002850420) (0xc002719b80) Stream removed, broadcasting: 1 I0516 00:04:35.112935 7 log.go:172] (0xc002850420) (0xc0025bd4a0) Stream removed, broadcasting: 3 I0516 00:04:35.112946 7 log.go:172] (0xc002850420) (0xc001f91d60) Stream removed, broadcasting: 5 May 16 00:04:35.112: INFO: Pod exec output: / STEP: Waiting for container to restart May 16 00:04:35.116: INFO: Container dapi-container, restarts: 0 May 16 00:04:45.135: INFO: Container dapi-container, restarts: 0 May 16 00:04:55.121: INFO: Container dapi-container, restarts: 0 May 16 00:05:05.120: INFO: Container dapi-container, restarts: 0 May 16 00:05:15.120: INFO: Container dapi-container, restarts: 1 May 16 00:05:15.120: INFO: Container has restart count: 1 STEP: Rewriting the file May 16 00:05:15.120: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-7877 PodName:var-expansion-c5dda240-43e9-47cf-8a41-af1914808120 ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:05:15.120: INFO: >>> kubeConfig: /root/.kube/config I0516 00:05:15.146363 7 log.go:172] (0xc0029d0210) (0xc0025bdae0) Create stream I0516 00:05:15.146386 7 log.go:172] (0xc0029d0210) (0xc0025bdae0) Stream added, broadcasting: 1 I0516 00:05:15.147793 7 log.go:172] (0xc0029d0210) Reply frame received for 1 I0516 00:05:15.147817 7 log.go:172] (0xc0029d0210) (0xc0025bdb80) Create stream I0516 00:05:15.147826 7 log.go:172] (0xc0029d0210) (0xc0025bdb80) Stream added, broadcasting: 3 I0516 00:05:15.148647 7 log.go:172] (0xc0029d0210) Reply frame received for 3 I0516 00:05:15.148673 7 log.go:172] (0xc0029d0210) (0xc001b3c140) Create stream I0516 00:05:15.148685 7 log.go:172] (0xc0029d0210) (0xc001b3c140) Stream added, broadcasting: 5 I0516 00:05:15.149899 7 log.go:172] (0xc0029d0210) Reply frame received for 5 I0516 00:05:15.210496 7 log.go:172] (0xc0029d0210) Data frame received for 5 I0516 00:05:15.210583 7 log.go:172] (0xc001b3c140) (5) Data frame handling I0516 00:05:15.210662 7 log.go:172] (0xc0029d0210) Data frame received for 3 I0516 00:05:15.210694 7 log.go:172] (0xc0025bdb80) (3) Data frame handling I0516 00:05:15.211845 7 log.go:172] (0xc0029d0210) Data frame received for 1 I0516 00:05:15.211862 7 log.go:172] (0xc0025bdae0) (1) Data frame handling I0516 00:05:15.211881 7 log.go:172] (0xc0025bdae0) (1) Data frame sent I0516 00:05:15.211893 7 log.go:172] (0xc0029d0210) (0xc0025bdae0) Stream removed, broadcasting: 1 I0516 00:05:15.211972 7 log.go:172] (0xc0029d0210) (0xc0025bdae0) Stream removed, broadcasting: 1 I0516 00:05:15.211983 7 log.go:172] (0xc0029d0210) (0xc0025bdb80) Stream removed, broadcasting: 3 I0516 00:05:15.212046 7 log.go:172] (0xc0029d0210) Go away received I0516 00:05:15.212209 7 log.go:172] (0xc0029d0210) (0xc001b3c140) Stream removed, broadcasting: 5 May 16 00:05:15.212: INFO: Exec stderr: "" May 16 00:05:15.212: INFO: Pod exec output: STEP: Waiting for container to stop restarting May 16 00:05:43.220: INFO: Container has restart count: 2 May 16 00:06:45.220: INFO: Container restart has stabilized STEP: test for subpath mounted with old value May 16 00:06:45.249: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-7877 PodName:var-expansion-c5dda240-43e9-47cf-8a41-af1914808120 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} May 16 00:06:45.249: INFO: >>> kubeConfig: /root/.kube/config I0516 00:06:45.291716 7 log.go:172] (0xc002b0d3f0) (0xc002718460) Create stream I0516 00:06:45.291740 7 log.go:172] (0xc002b0d3f0) (0xc002718460) Stream added, broadcasting: 1 I0516 00:06:45.293876 7 log.go:172] (0xc002b0d3f0) Reply frame received for 1 I0516 00:06:45.293912 7 log.go:172] (0xc002b0d3f0) (0xc0025bc000) Create stream I0516 00:06:45.293938 7 log.go:172] (0xc002b0d3f0) (0xc0025bc000) Stream added, broadcasting: 3 I0516 00:06:45.295372 7 log.go:172] (0xc002b0d3f0) Reply frame received for 3 I0516 00:06:45.295413 7 log.go:172] (0xc002b0d3f0) (0xc001b3c000) Create stream I0516 00:06:45.295422 7 log.go:172] (0xc002b0d3f0) (0xc001b3c000) Stream added, broadcasting: 5 I0516 00:06:45.296715 7 log.go:172] (0xc002b0d3f0) Reply frame received for 5 I0516 00:06:45.354642 7 log.go:172] (0xc002b0d3f0) Data frame received for 5 I0516 00:06:45.354687 7 log.go:172] (0xc001b3c000) (5) Data frame handling I0516 00:06:45.354753 7 log.go:172] (0xc002b0d3f0) Data frame received for 3 I0516 00:06:45.354805 7 log.go:172] (0xc0025bc000) (3) Data frame handling I0516 00:06:45.355942 7 log.go:172] (0xc002b0d3f0) Data frame received for 1 I0516 00:06:45.356002 7 log.go:172] (0xc002718460) (1) Data frame handling I0516 00:06:45.356050 7 log.go:172] (0xc002718460) (1) Data frame sent I0516 00:06:45.356077 7 log.go:172] (0xc002b0d3f0) (0xc002718460) Stream removed, broadcasting: 1 I0516 00:06:45.356102 7 log.go:172] (0xc002b0d3f0) Go away received I0516 00:06:45.356234 7 log.go:172] (0xc002b0d3f0) (0xc002718460) Stream removed, broadcasting: 1 I0516 00:06:45.356255 7 log.go:172] (0xc002b0d3f0) (0xc0025bc000) Stream removed, broadcasting: 3 I0516 00:06:45.356271 7 log.go:172] (0xc002b0d3f0) (0xc001b3c000) Stream removed, broadcasting: 5 May 16 00:06:45.360: INFO: ExecWithOptions {Command:[/bin/sh -c test ! 
-f /volume_mount/newsubpath/test.log] Namespace:var-expansion-7877 PodName:var-expansion-c5dda240-43e9-47cf-8a41-af1914808120 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:06:45.360: INFO: >>> kubeConfig: /root/.kube/config I0516 00:06:45.384512 7 log.go:172] (0xc0028504d0) (0xc0025bcaa0) Create stream I0516 00:06:45.384537 7 log.go:172] (0xc0028504d0) (0xc0025bcaa0) Stream added, broadcasting: 1 I0516 00:06:45.386204 7 log.go:172] (0xc0028504d0) Reply frame received for 1 I0516 00:06:45.386258 7 log.go:172] (0xc0028504d0) (0xc002584be0) Create stream I0516 00:06:45.386278 7 log.go:172] (0xc0028504d0) (0xc002584be0) Stream added, broadcasting: 3 I0516 00:06:45.387232 7 log.go:172] (0xc0028504d0) Reply frame received for 3 I0516 00:06:45.387273 7 log.go:172] (0xc0028504d0) (0xc002718500) Create stream I0516 00:06:45.387291 7 log.go:172] (0xc0028504d0) (0xc002718500) Stream added, broadcasting: 5 I0516 00:06:45.388143 7 log.go:172] (0xc0028504d0) Reply frame received for 5 I0516 00:06:45.455623 7 log.go:172] (0xc0028504d0) Data frame received for 5 I0516 00:06:45.455700 7 log.go:172] (0xc002718500) (5) Data frame handling I0516 00:06:45.455756 7 log.go:172] (0xc0028504d0) Data frame received for 3 I0516 00:06:45.455804 7 log.go:172] (0xc002584be0) (3) Data frame handling I0516 00:06:45.457430 7 log.go:172] (0xc0028504d0) Data frame received for 1 I0516 00:06:45.457492 7 log.go:172] (0xc0025bcaa0) (1) Data frame handling I0516 00:06:45.457528 7 log.go:172] (0xc0025bcaa0) (1) Data frame sent I0516 00:06:45.457800 7 log.go:172] (0xc0028504d0) (0xc0025bcaa0) Stream removed, broadcasting: 1 I0516 00:06:45.457876 7 log.go:172] (0xc0028504d0) Go away received I0516 00:06:45.457917 7 log.go:172] (0xc0028504d0) (0xc0025bcaa0) Stream removed, broadcasting: 1 I0516 00:06:45.457937 7 log.go:172] (0xc0028504d0) (0xc002584be0) Stream removed, broadcasting: 3 I0516 00:06:45.457953 7 log.go:172] (0xc0028504d0) (0xc002718500) Stream removed, broadcasting: 5 May 16 00:06:45.457: INFO: Deleting pod "var-expansion-c5dda240-43e9-47cf-8a41-af1914808120" in namespace "var-expansion-7877" May 16 00:06:45.463: INFO: Wait up to 5m0s for pod "var-expansion-c5dda240-43e9-47cf-8a41-af1914808120" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:07:25.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7877" for this suite. 
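Each ExecWithOptions record above opens an SPDY exec session against the kubelet; in the frame logs, stream 3 carries stdout and stream 5 carries stderr back to the client. Outside the framework, the same call looks like the following client-go sketch, where the namespace, pod, and container names are placeholders:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Namespace/pod/container are placeholders for whatever pod you exec into.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("default").
		Name("target-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main",
			Command:   []string{"/bin/sh", "-c", "rm /volume_mount/foo/test.log"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Println("stdout:", stdout.String())
	fmt.Println("stderr:", stderr.String())
}
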
• [SLOW TEST:179.363 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":77,"skipped":1301,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:07:25.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1340 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-1340 May 16 00:07:25.657: INFO: Found 0 stateful pods, waiting for 1 May 16 00:07:35.662: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 16 00:07:35.694: INFO: Deleting all statefulset in ns statefulset-1340 May 16 00:07:35.707: INFO: Scaling statefulset ss to 0 May 16 00:07:55.752: INFO: Waiting for statefulset status.replicas updated to 0 May 16 00:07:55.754: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:07:55.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1340" for this suite. 
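The "getting/updating a scale subresource" steps above map directly onto the GetScale/UpdateScale calls of the apps/v1 client. A sketch using the namespace and StatefulSet name from this run; the target replica count is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, name := "statefulset-1340", "ss" // names from the log above
	ctx := context.TODO()

	// Read the scale subresource, bump replicas, write it back. The test
	// then verifies the StatefulSet's Spec.Replicas reflects the change.
	scale, err := cs.AppsV1().StatefulSets(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 2 // illustrative target
	updated, err := cs.AppsV1().StatefulSets(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("replicas now:", updated.Spec.Replicas)
}
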
• [SLOW TEST:30.275 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":78,"skipped":1306,"failed":0} [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:07:55.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7887 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7887 STEP: creating replication controller externalsvc in namespace services-7887 I0516 00:07:56.160053 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7887, replica count: 2 I0516 00:07:59.210420 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 00:08:02.210669 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 16 00:08:02.276: INFO: Creating new exec pod May 16 00:08:06.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7887 execpod6g9wz -- /bin/sh -x -c nslookup nodeport-service' May 16 00:08:06.502: INFO: stderr: "I0516 00:08:06.409529 1675 log.go:172] (0xc0009ad550) (0xc0009c0280) Create stream\nI0516 00:08:06.409563 1675 log.go:172] (0xc0009ad550) (0xc0009c0280) Stream added, broadcasting: 1\nI0516 00:08:06.413989 1675 log.go:172] (0xc0009ad550) Reply frame received for 1\nI0516 00:08:06.414032 1675 log.go:172] (0xc0009ad550) (0xc000826d20) Create stream\nI0516 00:08:06.414044 1675 log.go:172] (0xc0009ad550) (0xc000826d20) Stream added, broadcasting: 3\nI0516 00:08:06.417294 1675 log.go:172] (0xc0009ad550) Reply frame received for 3\nI0516 00:08:06.417340 1675 log.go:172] (0xc0009ad550) (0xc00081e5a0) Create stream\nI0516 00:08:06.417357 1675 log.go:172] (0xc0009ad550) (0xc00081e5a0) Stream added, broadcasting: 5\nI0516 00:08:06.418112 1675 log.go:172] (0xc0009ad550) Reply frame received for 5\nI0516 
00:08:06.487486 1675 log.go:172] (0xc0009ad550) Data frame received for 5\nI0516 00:08:06.487506 1675 log.go:172] (0xc00081e5a0) (5) Data frame handling\nI0516 00:08:06.487517 1675 log.go:172] (0xc00081e5a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0516 00:08:06.494873 1675 log.go:172] (0xc0009ad550) Data frame received for 3\nI0516 00:08:06.494894 1675 log.go:172] (0xc000826d20) (3) Data frame handling\nI0516 00:08:06.494911 1675 log.go:172] (0xc000826d20) (3) Data frame sent\nI0516 00:08:06.495692 1675 log.go:172] (0xc0009ad550) Data frame received for 3\nI0516 00:08:06.495702 1675 log.go:172] (0xc000826d20) (3) Data frame handling\nI0516 00:08:06.495713 1675 log.go:172] (0xc000826d20) (3) Data frame sent\nI0516 00:08:06.496292 1675 log.go:172] (0xc0009ad550) Data frame received for 5\nI0516 00:08:06.496304 1675 log.go:172] (0xc00081e5a0) (5) Data frame handling\nI0516 00:08:06.496324 1675 log.go:172] (0xc0009ad550) Data frame received for 3\nI0516 00:08:06.496331 1675 log.go:172] (0xc000826d20) (3) Data frame handling\nI0516 00:08:06.497948 1675 log.go:172] (0xc0009ad550) Data frame received for 1\nI0516 00:08:06.497968 1675 log.go:172] (0xc0009c0280) (1) Data frame handling\nI0516 00:08:06.497985 1675 log.go:172] (0xc0009c0280) (1) Data frame sent\nI0516 00:08:06.498009 1675 log.go:172] (0xc0009ad550) (0xc0009c0280) Stream removed, broadcasting: 1\nI0516 00:08:06.498025 1675 log.go:172] (0xc0009ad550) Go away received\nI0516 00:08:06.498354 1675 log.go:172] (0xc0009ad550) (0xc0009c0280) Stream removed, broadcasting: 1\nI0516 00:08:06.498372 1675 log.go:172] (0xc0009ad550) (0xc000826d20) Stream removed, broadcasting: 3\nI0516 00:08:06.498381 1675 log.go:172] (0xc0009ad550) (0xc00081e5a0) Stream removed, broadcasting: 5\n" May 16 00:08:06.502: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7887.svc.cluster.local\tcanonical name = externalsvc.services-7887.svc.cluster.local.\nName:\texternalsvc.services-7887.svc.cluster.local\nAddress: 10.96.149.113\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7887, will wait for the garbage collector to delete the pods May 16 00:08:06.561: INFO: Deleting ReplicationController externalsvc took: 5.58458ms May 16 00:08:06.661: INFO: Terminating ReplicationController externalsvc pods took: 100.210654ms May 16 00:08:15.313: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:08:15.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7887" for this suite. 
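Flipping a Service from NodePort to ExternalName, as this test just did, is a single update: the type changes, externalName points at the other service's in-cluster FQDN, and the cluster IP is cleared (ports and selector are dropped here for clarity), after which DNS answers with the CNAME seen in the nslookup output above. A client-go sketch under those assumptions, using the names from this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	ns, name := "services-7887", "nodeport-service" // from the log above
	svc, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// ExternalName services carry no cluster IP; clear the NodePort-era
	// fields before the update so validation accepts the type change.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-7887.svc.cluster.local"
	svc.Spec.ClusterIP = ""
	svc.Spec.Ports = nil
	svc.Spec.Selector = nil

	if _, err := cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("service converted; DNS for", name, "now returns a CNAME")
}
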
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:19.593 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":79,"skipped":1306,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:08:15.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-4409f657-7945-43d4-a871-9dffd7caf2a8 STEP: Creating a pod to test consume secrets May 16 00:08:15.596: INFO: Waiting up to 5m0s for pod "pod-secrets-6dc5cd9b-a429-4363-bafa-b8465a2efcba" in namespace "secrets-9761" to be "Succeeded or Failed" May 16 00:08:15.600: INFO: Pod "pod-secrets-6dc5cd9b-a429-4363-bafa-b8465a2efcba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124646ms May 16 00:08:17.801: INFO: Pod "pod-secrets-6dc5cd9b-a429-4363-bafa-b8465a2efcba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204729431s May 16 00:08:19.803: INFO: Pod "pod-secrets-6dc5cd9b-a429-4363-bafa-b8465a2efcba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.20742657s STEP: Saw pod success May 16 00:08:19.803: INFO: Pod "pod-secrets-6dc5cd9b-a429-4363-bafa-b8465a2efcba" satisfied condition "Succeeded or Failed" May 16 00:08:19.806: INFO: Trying to get logs from node latest-worker pod pod-secrets-6dc5cd9b-a429-4363-bafa-b8465a2efcba container secret-volume-test: STEP: delete the pod May 16 00:08:19.845: INFO: Waiting for pod pod-secrets-6dc5cd9b-a429-4363-bafa-b8465a2efcba to disappear May 16 00:08:19.851: INFO: Pod pod-secrets-6dc5cd9b-a429-4363-bafa-b8465a2efcba no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:08:19.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9761" for this suite. STEP: Destroying namespace "secret-namespace-5790" for this suite. 
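The Secrets spec above hinges on secret references being namespace-local: a volume names only a secretName, which is resolved in the pod's own namespace, so the identically named secret created in the second namespace ("secret-namespace-5790") can never be picked up. A minimal sketch of such a pod; the image, command, and data key are illustrative stand-ins for the suite's fixtures:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example                       # illustrative name
  namespace: secrets-9761
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29         # assumed image
    command: ["cat", "/etc/secret-volume/data-1"] # assumed key name
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-4409f657-7945-43d4-a871-9dffd7caf2a8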
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":80,"skipped":1342,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:08:19.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-ss5k STEP: Creating a pod to test atomic-volume-subpath May 16 00:08:20.002: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ss5k" in namespace "subpath-9273" to be "Succeeded or Failed" May 16 00:08:20.058: INFO: Pod "pod-subpath-test-configmap-ss5k": Phase="Pending", Reason="", readiness=false. Elapsed: 55.797706ms May 16 00:08:22.171: INFO: Pod "pod-subpath-test-configmap-ss5k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168077353s May 16 00:08:24.175: INFO: Pod "pod-subpath-test-configmap-ss5k": Phase="Running", Reason="", readiness=true. Elapsed: 4.172105687s May 16 00:08:26.179: INFO: Pod "pod-subpath-test-configmap-ss5k": Phase="Running", Reason="", readiness=true. Elapsed: 6.176220536s May 16 00:08:28.183: INFO: Pod "pod-subpath-test-configmap-ss5k": Phase="Running", Reason="", readiness=true. Elapsed: 8.180556719s May 16 00:08:30.188: INFO: Pod "pod-subpath-test-configmap-ss5k": Phase="Running", Reason="", readiness=true. Elapsed: 10.185167478s May 16 00:08:32.268: INFO: Pod "pod-subpath-test-configmap-ss5k": Phase="Running", Reason="", readiness=true. Elapsed: 12.265788608s May 16 00:08:34.274: INFO: Pod "pod-subpath-test-configmap-ss5k": Phase="Running", Reason="", readiness=true. Elapsed: 14.271127324s May 16 00:08:36.277: INFO: Pod "pod-subpath-test-configmap-ss5k": Phase="Running", Reason="", readiness=true. Elapsed: 16.274945931s May 16 00:08:38.284: INFO: Pod "pod-subpath-test-configmap-ss5k": Phase="Running", Reason="", readiness=true. Elapsed: 18.28153174s May 16 00:08:40.310: INFO: Pod "pod-subpath-test-configmap-ss5k": Phase="Running", Reason="", readiness=true. Elapsed: 20.307148296s May 16 00:08:42.314: INFO: Pod "pod-subpath-test-configmap-ss5k": Phase="Running", Reason="", readiness=true. Elapsed: 22.311873688s May 16 00:08:44.318: INFO: Pod "pod-subpath-test-configmap-ss5k": Phase="Running", Reason="", readiness=true. Elapsed: 24.315843886s May 16 00:08:46.322: INFO: Pod "pod-subpath-test-configmap-ss5k": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.319491161s STEP: Saw pod success May 16 00:08:46.322: INFO: Pod "pod-subpath-test-configmap-ss5k" satisfied condition "Succeeded or Failed" May 16 00:08:46.325: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-ss5k container test-container-subpath-configmap-ss5k: STEP: delete the pod May 16 00:08:46.413: INFO: Waiting for pod pod-subpath-test-configmap-ss5k to disappear May 16 00:08:46.421: INFO: Pod pod-subpath-test-configmap-ss5k no longer exists STEP: Deleting pod pod-subpath-test-configmap-ss5k May 16 00:08:46.421: INFO: Deleting pod "pod-subpath-test-configmap-ss5k" in namespace "subpath-9273" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:08:46.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9273" for this suite. • [SLOW TEST:26.533 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":81,"skipped":1344,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:08:46.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0516 00:08:56.571343 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 16 00:08:56.571: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:08:56.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3738" for this suite. • [SLOW TEST:10.144 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":82,"skipped":1385,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:08:56.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-a742d339-ffdb-4198-998e-806c09464c0d in namespace container-probe-2981 May 16 00:09:00.858: INFO: Started pod liveness-a742d339-ffdb-4198-998e-806c09464c0d in namespace container-probe-2981 STEP: checking the pod's current state and verifying that restartCount is present May 16 00:09:00.861: INFO: Initial restart count of pod liveness-a742d339-ffdb-4198-998e-806c09464c0d is 0 May 16 00:09:17.376: INFO: Restart count of pod container-probe-2981/liveness-a742d339-ffdb-4198-998e-806c09464c0d is now 1 (16.514808007s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:09:17.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2981" for this suite.
• [SLOW TEST:20.955 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":83,"skipped":1397,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:09:17.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6544 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 16 00:09:18.004: INFO: Found 0 stateful pods, waiting for 3 May 16 00:09:28.009: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 16 00:09:28.009: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 16 00:09:28.009: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 16 00:09:38.009: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 16 00:09:38.009: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 16 00:09:38.009: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 16 00:09:38.040: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 16 00:09:48.075: INFO: Updating stateful set ss2 May 16 00:09:48.115: INFO: Waiting for Pod statefulset-6544/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 16 00:09:58.730: INFO: Found 2 stateful pods, waiting for 3 May 16 00:10:08.736: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 16 00:10:08.736: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 16 00:10:08.736: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - 
Ready=true STEP: Performing a phased rolling update May 16 00:10:08.763: INFO: Updating stateful set ss2 May 16 00:10:08.827: INFO: Waiting for Pod statefulset-6544/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 16 00:10:18.855: INFO: Updating stateful set ss2 May 16 00:10:18.878: INFO: Waiting for StatefulSet statefulset-6544/ss2 to complete update May 16 00:10:18.878: INFO: Waiting for Pod statefulset-6544/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 16 00:10:28.887: INFO: Deleting all statefulset in ns statefulset-6544 May 16 00:10:28.890: INFO: Scaling statefulset ss2 to 0 May 16 00:10:58.915: INFO: Waiting for statefulset status.replicas updated to 0 May 16 00:10:58.919: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:10:58.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6544" for this suite. • [SLOW TEST:101.409 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":84,"skipped":1412,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:10:58.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-aaed38a5-8458-4f11-8450-531052dce26b STEP: Creating a pod to test consume secrets May 16 00:10:59.027: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ee6a03c7-2a82-41b8-88de-522e8934b9b1" in namespace "projected-1763" to be "Succeeded or Failed" May 16 00:10:59.040: INFO: Pod "pod-projected-secrets-ee6a03c7-2a82-41b8-88de-522e8934b9b1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.185433ms May 16 00:11:01.046: INFO: Pod "pod-projected-secrets-ee6a03c7-2a82-41b8-88de-522e8934b9b1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018356093s May 16 00:11:03.078: INFO: Pod "pod-projected-secrets-ee6a03c7-2a82-41b8-88de-522e8934b9b1": Phase="Running", Reason="", readiness=true. Elapsed: 4.050949158s May 16 00:11:05.089: INFO: Pod "pod-projected-secrets-ee6a03c7-2a82-41b8-88de-522e8934b9b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06166907s STEP: Saw pod success May 16 00:11:05.089: INFO: Pod "pod-projected-secrets-ee6a03c7-2a82-41b8-88de-522e8934b9b1" satisfied condition "Succeeded or Failed" May 16 00:11:05.091: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-ee6a03c7-2a82-41b8-88de-522e8934b9b1 container projected-secret-volume-test: STEP: delete the pod May 16 00:11:05.270: INFO: Waiting for pod pod-projected-secrets-ee6a03c7-2a82-41b8-88de-522e8934b9b1 to disappear May 16 00:11:05.306: INFO: Pod pod-projected-secrets-ee6a03c7-2a82-41b8-88de-522e8934b9b1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:11:05.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1763" for this suite. • [SLOW TEST:6.369 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":85,"skipped":1438,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:11:05.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-31ff528e-c582-4983-84d3-33cf36865d06 STEP: Creating a pod to test consume configMaps May 16 00:11:05.607: INFO: Waiting up to 5m0s for pod "pod-configmaps-888ee81f-1d33-43a0-b85b-bcb676c77a7d" in namespace "configmap-3737" to be "Succeeded or Failed" May 16 00:11:05.688: INFO: Pod "pod-configmaps-888ee81f-1d33-43a0-b85b-bcb676c77a7d": Phase="Pending", Reason="", readiness=false. Elapsed: 80.843154ms May 16 00:11:07.748: INFO: Pod "pod-configmaps-888ee81f-1d33-43a0-b85b-bcb676c77a7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141076114s May 16 00:11:09.753: INFO: Pod "pod-configmaps-888ee81f-1d33-43a0-b85b-bcb676c77a7d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.146012837s STEP: Saw pod success May 16 00:11:09.753: INFO: Pod "pod-configmaps-888ee81f-1d33-43a0-b85b-bcb676c77a7d" satisfied condition "Succeeded or Failed" May 16 00:11:09.756: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-888ee81f-1d33-43a0-b85b-bcb676c77a7d container configmap-volume-test: STEP: delete the pod May 16 00:11:09.811: INFO: Waiting for pod pod-configmaps-888ee81f-1d33-43a0-b85b-bcb676c77a7d to disappear May 16 00:11:09.839: INFO: Pod pod-configmaps-888ee81f-1d33-43a0-b85b-bcb676c77a7d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:11:09.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3737" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":86,"skipped":1440,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:11:09.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 00:11:10.757: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 00:11:13.016: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725184670, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725184670, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725184670, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725184670, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:11:15.019: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725184670, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725184670, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725184670, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725184670, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 00:11:18.041: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 16 00:11:18.061: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:11:18.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-225" for this suite. STEP: Destroying namespace "webhook-225-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.285 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":87,"skipped":1448,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:11:18.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:11:18.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5190" for this suite. STEP: Destroying namespace "nspatchtest-3fd21cbb-fd81-488e-8028-9bb5b471a189-3896" for this suite. 
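The patch step above merges a label into the Namespace and then reads it back. A sketch of an equivalent merge patch; the label key and value are illustrative, not necessarily the suite's:

# Applied with, e.g.:
#   kubectl patch namespace <name> --type=merge -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'
# The same patch body as YAML:
metadata:
  labels:
    testLabel: testValue    # illustrative key/value

A merge patch only has to carry the fields being changed; the server merges it into the existing object, which is why the subsequent GET can assert the label's presence.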
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":88,"skipped":1467,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:11:18.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 16 00:11:19.129: INFO: Waiting up to 5m0s for pod "pod-ba5202af-371f-436d-a322-5472516e49b0" in namespace "emptydir-6112" to be "Succeeded or Failed" May 16 00:11:19.188: INFO: Pod "pod-ba5202af-371f-436d-a322-5472516e49b0": Phase="Pending", Reason="", readiness=false. Elapsed: 58.204896ms May 16 00:11:21.235: INFO: Pod "pod-ba5202af-371f-436d-a322-5472516e49b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106111226s May 16 00:11:23.241: INFO: Pod "pod-ba5202af-371f-436d-a322-5472516e49b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.112087999s STEP: Saw pod success May 16 00:11:23.241: INFO: Pod "pod-ba5202af-371f-436d-a322-5472516e49b0" satisfied condition "Succeeded or Failed" May 16 00:11:23.254: INFO: Trying to get logs from node latest-worker2 pod pod-ba5202af-371f-436d-a322-5472516e49b0 container test-container: STEP: delete the pod May 16 00:11:23.303: INFO: Waiting for pod pod-ba5202af-371f-436d-a322-5472516e49b0 to disappear May 16 00:11:23.371: INFO: Pod pod-ba5202af-371f-436d-a322-5472516e49b0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:11:23.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6112" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":89,"skipped":1469,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:11:23.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:11:23.447: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 16 00:11:26.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3414 create -f -' May 16 00:11:30.069: INFO: stderr: "" May 16 00:11:30.069: INFO: stdout: "e2e-test-crd-publish-openapi-9950-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 16 00:11:30.069: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3414 delete e2e-test-crd-publish-openapi-9950-crds test-foo' May 16 00:11:30.169: INFO: stderr: "" May 16 00:11:30.169: INFO: stdout: "e2e-test-crd-publish-openapi-9950-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 16 00:11:30.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3414 apply -f -' May 16 00:11:30.421: INFO: stderr: "" May 16 00:11:30.422: INFO: stdout: "e2e-test-crd-publish-openapi-9950-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 16 00:11:30.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3414 delete e2e-test-crd-publish-openapi-9950-crds test-foo' May 16 00:11:30.526: INFO: stderr: "" May 16 00:11:30.527: INFO: stdout: "e2e-test-crd-publish-openapi-9950-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 16 00:11:30.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3414 create -f -' May 16 00:11:30.748: INFO: rc: 1 May 16 00:11:30.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3414 apply -f -' May 16 00:11:30.969: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 16 00:11:30.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3414 create -f -' May 
16 00:11:31.190: INFO: rc: 1 May 16 00:11:31.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3414 apply -f -' May 16 00:11:31.431: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 16 00:11:31.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9950-crds' May 16 00:11:31.676: INFO: stderr: "" May 16 00:11:31.677: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9950-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 16 00:11:31.677: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9950-crds.metadata' May 16 00:11:31.905: INFO: stderr: "" May 16 00:11:31.905: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9950-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. 
Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. 
Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 16 00:11:31.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9950-crds.spec' May 16 00:11:32.177: INFO: stderr: "" May 16 00:11:32.177: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9950-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 16 00:11:32.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9950-crds.spec.bars' May 16 00:11:32.466: INFO: stderr: "" May 16 00:11:32.466: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9950-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 16 00:11:32.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9950-crds.spec.bars2' May 16 00:11:32.747: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:11:35.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3414" for this suite. • [SLOW TEST:12.288 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":90,"skipped":1500,"failed":0} S ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:11:35.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 
00:11:40.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6991" for this suite. • [SLOW TEST:5.149 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":91,"skipped":1501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:11:40.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 16 00:11:45.472: INFO: Successfully updated pod "labelsupdatedf6ae817-78a0-4f4d-a05f-fd095b03342a" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:11:49.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5914" for this suite. 
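The projected downward API spec above checks that a labels file inside the volume is rewritten after the pod's labels are updated ("Successfully updated pod" is the relabel step). A minimal sketch of the pod shape involved; names and the polling command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example    # illustrative
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29    # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels    # kubelet rewrites this file when labels change

Relabeling the pod in place (for example, kubectl label pod labelsupdate-example key2=value2) changes the file's contents without restarting the container, which is the behavior the spec waits for.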
• [SLOW TEST:8.697 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":92,"skipped":1549,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:11:49.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:12:07.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1949" for this suite. • [SLOW TEST:18.147 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":93,"skipped":1563,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:12:07.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 16 00:12:07.766: INFO: Waiting up to 5m0s for pod "downward-api-5b572f7b-5ff5-4c8a-9830-43458190babb" in namespace "downward-api-237" to be "Succeeded or Failed" May 16 00:12:07.776: INFO: Pod "downward-api-5b572f7b-5ff5-4c8a-9830-43458190babb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.205854ms May 16 00:12:09.928: INFO: Pod "downward-api-5b572f7b-5ff5-4c8a-9830-43458190babb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161839317s May 16 00:12:11.932: INFO: Pod "downward-api-5b572f7b-5ff5-4c8a-9830-43458190babb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.165609275s STEP: Saw pod success May 16 00:12:11.932: INFO: Pod "downward-api-5b572f7b-5ff5-4c8a-9830-43458190babb" satisfied condition "Succeeded or Failed" May 16 00:12:11.935: INFO: Trying to get logs from node latest-worker2 pod downward-api-5b572f7b-5ff5-4c8a-9830-43458190babb container dapi-container: STEP: delete the pod May 16 00:12:12.273: INFO: Waiting for pod downward-api-5b572f7b-5ff5-4c8a-9830-43458190babb to disappear May 16 00:12:12.285: INFO: Pod downward-api-5b572f7b-5ff5-4c8a-9830-43458190babb no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:12:12.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-237" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":94,"skipped":1602,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:12:12.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 16 00:12:12.408: INFO: PodSpec: initContainers in spec.initContainers May 16 00:13:06.378: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-519ea493-2af0-4639-b9d0-e8a75a7ec77a", GenerateName:"", Namespace:"init-container-7972", SelfLink:"/api/v1/namespaces/init-container-7972/pods/pod-init-519ea493-2af0-4639-b9d0-e8a75a7ec77a", UID:"e7f83640-8c7d-439c-bc5e-c67e79b65866", ResourceVersion:"5007104", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725184732, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"408347445"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00272cb20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00272cb40)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00272cb60), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc00272cb80)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-h9x2x", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0056c52c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h9x2x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h9x2x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h9x2x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002abad98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002966a10), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002abae30)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002abae50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002abae58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002abae5c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725184732, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725184732, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725184732, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725184732, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.1.108", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.108"}}, StartTime:(*v1.Time)(0xc00272cba0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002966af0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002966bd0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://7003ecdc8e98e7365ead68a8222558fbdfe5cf2d7d1aa6606efe8f8b02f9a9a9", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00272cbe0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00272cbc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002abaedf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:13:06.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7972" for this suite. 
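To make the wall of struct output above easier to read: the pod under test gates a single pause container behind two ordered init containers, the first of which always fails. A minimal Go sketch of that shape, reconstructed from the dump (names, images, and commands are taken from it; everything else is illustrative, not the test's actual source):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-init-"},
            Spec: corev1.PodSpec{
                // RestartPolicy Always means the kubelet keeps retrying the
                // failing init container (note RestartCount:3 in the dump)
                // rather than marking the pod Failed.
                RestartPolicy: corev1.RestartPolicyAlways,
                InitContainers: []corev1.Container{
                    // init1 always exits non-zero, so init2 never runs...
                    {Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
                    {Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
                },
                // ...and run1 must stay Waiting, which is exactly what the
                // ContainersNotInitialized / ContainersNotReady conditions
                // in the dump assert.
                Containers: []corev1.Container{
                    {Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
                },
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }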
• [SLOW TEST:54.117 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":95,"skipped":1607,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:13:06.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 16 00:13:06.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14ebcb08-a8a1-4c91-8886-d24e948c3544" in namespace "downward-api-7776" to be "Succeeded or Failed" May 16 00:13:06.519: INFO: Pod "downwardapi-volume-14ebcb08-a8a1-4c91-8886-d24e948c3544": Phase="Pending", Reason="", readiness=false. Elapsed: 2.719461ms May 16 00:13:08.582: INFO: Pod "downwardapi-volume-14ebcb08-a8a1-4c91-8886-d24e948c3544": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06585695s May 16 00:13:10.587: INFO: Pod "downwardapi-volume-14ebcb08-a8a1-4c91-8886-d24e948c3544": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070790422s STEP: Saw pod success May 16 00:13:10.587: INFO: Pod "downwardapi-volume-14ebcb08-a8a1-4c91-8886-d24e948c3544" satisfied condition "Succeeded or Failed" May 16 00:13:10.591: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-14ebcb08-a8a1-4c91-8886-d24e948c3544 container client-container: STEP: delete the pod May 16 00:13:10.622: INFO: Waiting for pod downwardapi-volume-14ebcb08-a8a1-4c91-8886-d24e948c3544 to disappear May 16 00:13:10.634: INFO: Pod downwardapi-volume-14ebcb08-a8a1-4c91-8886-d24e948c3544 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:13:10.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7776" for this suite. 
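The downward API volume exercised in the test above maps a container resource field onto a file in the pod's filesystem. A sketch of the relevant wiring, assuming (consistent with the test name) that it exposes limits.memory to client-container; the limit value, mount path, and command are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{
                            corev1.ResourceMemory: resource.MustParse("64Mi"),
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                // The file's contents are the container's
                                // memory limit in bytes, resolved per container.
                                Path: "memory_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.memory",
                                },
                            }},
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }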
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":96,"skipped":1610,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:13:10.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:13:15.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3405" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":97,"skipped":1624,"failed":0} SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:13:15.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 16 00:13:15.493: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 16 00:13:15.501: INFO: Waiting for terminating namespaces to be deleted... 
May 16 00:13:15.503: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 16 00:13:15.507: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 16 00:13:15.507: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 16 00:13:15.507: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 16 00:13:15.507: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 16 00:13:15.507: INFO: pod-init-519ea493-2af0-4639-b9d0-e8a75a7ec77a from init-container-7972 started at 2020-05-16 00:12:12 +0000 UTC (1 container statuses recorded) May 16 00:13:15.507: INFO: Container run1 ready: false, restart count 0 May 16 00:13:15.507: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 16 00:13:15.507: INFO: Container kindnet-cni ready: true, restart count 0 May 16 00:13:15.507: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 16 00:13:15.507: INFO: Container kube-proxy ready: true, restart count 0 May 16 00:13:15.507: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 16 00:13:15.512: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 16 00:13:15.512: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 16 00:13:15.512: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 16 00:13:15.512: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 16 00:13:15.512: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 16 00:13:15.512: INFO: Container kindnet-cni ready: true, restart count 0 May 16 00:13:15.512: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 16 00:13:15.512: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160f59dc24e90592], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.160f59dc2652831b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:13:16.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8256" for this suite. 
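The two FailedScheduling events above are the expected outcome: the pod carries a nodeSelector that no node in the three-node cluster satisfies, so it stays Pending until the namespace is destroyed. A sketch of such a pod (the selector key/value and image are assumptions; the real test generates a random label):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
            Spec: corev1.PodSpec{
                // No node carries this label, so the scheduler reports
                // "0/3 nodes are available: 3 node(s) didn't match node
                // selector" and never binds the pod.
                NodeSelector: map[string]string{"nonexistent-label": "true"},
                Containers: []corev1.Container{
                    {Name: "restricted", Image: "k8s.gcr.io/pause:3.2"},
                },
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }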
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":98,"skipped":1634,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:13:16.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:13:16.710: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 16 00:13:18.904: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:13:19.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9254" for this suite. 
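What "surface a failure condition" means concretely in the test above: a ResourceQuota capping the namespace at two pods plus a ReplicationController asking for more leaves the RC with a ReplicaFailure condition until it is scaled back within quota. A sketch of the two objects (the replica count of 3 and the image are assumptions; the log only says "more than the allowed pod quota"):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        replicas := int32(3)

        // Quota: at most two pods may run in the namespace.
        quota := corev1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
            Spec: corev1.ResourceQuotaSpec{
                Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
            },
        }

        // An RC asking for three replicas: the extra pod is rejected by
        // quota admission, and the RC records a ReplicaFailure condition.
        rc := corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: map[string]string{"name": "condition-test"},
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "condition-test"}},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{
                        {Name: "httpd", Image: "docker.io/library/httpd:2.4.38-alpine"},
                    }},
                },
            },
        }

        for _, obj := range []interface{}{quota, rc} {
            out, _ := json.MarshalIndent(obj, "", "  ")
            fmt.Println(string(out))
        }
    }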
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":99,"skipped":1638,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:13:19.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-b3d736c7-0d35-4eab-9905-761951ccaae4 STEP: Creating secret with name secret-projected-all-test-volume-09c60700-70e2-48a0-9cd4-186acf6153ac STEP: Creating a pod to test Check all projections for projected volume plugin May 16 00:13:20.860: INFO: Waiting up to 5m0s for pod "projected-volume-3467cd73-7c5b-4f7c-b348-c270ed1b0140" in namespace "projected-40" to be "Succeeded or Failed" May 16 00:13:21.086: INFO: Pod "projected-volume-3467cd73-7c5b-4f7c-b348-c270ed1b0140": Phase="Pending", Reason="", readiness=false. Elapsed: 225.826952ms May 16 00:13:23.327: INFO: Pod "projected-volume-3467cd73-7c5b-4f7c-b348-c270ed1b0140": Phase="Pending", Reason="", readiness=false. Elapsed: 2.466646488s May 16 00:13:25.331: INFO: Pod "projected-volume-3467cd73-7c5b-4f7c-b348-c270ed1b0140": Phase="Running", Reason="", readiness=true. Elapsed: 4.470833865s May 16 00:13:27.335: INFO: Pod "projected-volume-3467cd73-7c5b-4f7c-b348-c270ed1b0140": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.474766416s STEP: Saw pod success May 16 00:13:27.335: INFO: Pod "projected-volume-3467cd73-7c5b-4f7c-b348-c270ed1b0140" satisfied condition "Succeeded or Failed" May 16 00:13:27.338: INFO: Trying to get logs from node latest-worker pod projected-volume-3467cd73-7c5b-4f7c-b348-c270ed1b0140 container projected-all-volume-test: STEP: delete the pod May 16 00:13:27.636: INFO: Waiting for pod projected-volume-3467cd73-7c5b-4f7c-b348-c270ed1b0140 to disappear May 16 00:13:27.639: INFO: Pod projected-volume-3467cd73-7c5b-4f7c-b348-c270ed1b0140 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:13:27.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-40" for this suite. 
• [SLOW TEST:7.714 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":100,"skipped":1653,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:13:27.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:13:27.798: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"4139fc89-9214-4869-8427-e7a351a4bf67", Controller:(*bool)(0xc0055473e2), BlockOwnerDeletion:(*bool)(0xc0055473e3)}} May 16 00:13:27.838: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"8998d962-332f-4e6d-8634-7e88b5c316b6", Controller:(*bool)(0xc0035e2b0a), BlockOwnerDeletion:(*bool)(0xc0035e2b0b)}} May 16 00:13:27.953: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"be5383d3-2088-4c82-aee6-181605f878cc", Controller:(*bool)(0xc003569352), BlockOwnerDeletion:(*bool)(0xc003569353)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:13:33.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9907" for this suite. 
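The three INFO lines above show ownerReferences forming a cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. The point of the test is that the garbage collector still collects all three rather than deadlocking on the circle. A sketch of how such references are wired (UIDs are placeholders; in practice they come back from the create calls, as the logged values did):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
    )

    // ownerRef builds the blocking controller reference the test uses.
    func ownerRef(name string, uid types.UID) metav1.OwnerReference {
        isController := true
        return metav1.OwnerReference{
            APIVersion:         "v1",
            Kind:               "Pod",
            Name:               name,
            UID:                uid,
            Controller:         &isController,
            BlockOwnerDeletion: &isController,
        }
    }

    func main() {
        // pod1 <- pod3, pod2 <- pod1, pod3 <- pod2: a dependency circle.
        // Deleting any one pod lets the GC collect the whole cycle.
        pod1Refs := []metav1.OwnerReference{ownerRef("pod3", "uid-of-pod3")}
        pod2Refs := []metav1.OwnerReference{ownerRef("pod1", "uid-of-pod1")}
        pod3Refs := []metav1.OwnerReference{ownerRef("pod2", "uid-of-pod2")}
        fmt.Println(pod1Refs, pod2Refs, pod3Refs)
    }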
• [SLOW TEST:5.409 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":101,"skipped":1661,"failed":0} S ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:13:33.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-322/configmap-test-97152e75-94b3-4a00-9175-3bfcf5b7bfbd STEP: Creating a pod to test consume configMaps May 16 00:13:33.196: INFO: Waiting up to 5m0s for pod "pod-configmaps-5e637b7c-efc3-4348-bbab-1898a695a4d4" in namespace "configmap-322" to be "Succeeded or Failed" May 16 00:13:33.215: INFO: Pod "pod-configmaps-5e637b7c-efc3-4348-bbab-1898a695a4d4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.780873ms May 16 00:13:35.219: INFO: Pod "pod-configmaps-5e637b7c-efc3-4348-bbab-1898a695a4d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02272415s May 16 00:13:37.223: INFO: Pod "pod-configmaps-5e637b7c-efc3-4348-bbab-1898a695a4d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026372352s STEP: Saw pod success May 16 00:13:37.223: INFO: Pod "pod-configmaps-5e637b7c-efc3-4348-bbab-1898a695a4d4" satisfied condition "Succeeded or Failed" May 16 00:13:37.225: INFO: Trying to get logs from node latest-worker pod pod-configmaps-5e637b7c-efc3-4348-bbab-1898a695a4d4 container env-test: STEP: delete the pod May 16 00:13:37.274: INFO: Waiting for pod pod-configmaps-5e637b7c-efc3-4348-bbab-1898a695a4d4 to disappear May 16 00:13:37.287: INFO: Pod pod-configmaps-5e637b7c-efc3-4348-bbab-1898a695a4d4 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:13:37.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-322" for this suite. 
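For reference, consuming a ConfigMap "via the environment" as the test above does looks roughly like this; the ConfigMap name matches the log, while the key and environment variable names are assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        c := corev1.Container{
            Name:    "env-test",
            Image:   "docker.io/library/busybox:1.29",
            Command: []string{"sh", "-c", "env"},
            Env: []corev1.EnvVar{{
                Name: "CONFIG_DATA_1",
                ValueFrom: &corev1.EnvVarSource{
                    // A single key from the ConfigMap is injected as
                    // $CONFIG_DATA_1; the test then checks the pod's
                    // output for the expected value.
                    ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                        LocalObjectReference: corev1.LocalObjectReference{
                            Name: "configmap-test-97152e75-94b3-4a00-9175-3bfcf5b7bfbd",
                        },
                        Key: "data-1",
                    },
                },
            }},
        }
        out, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(out))
    }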
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":102,"skipped":1662,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:13:37.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 16 00:13:37.423: INFO: Waiting up to 5m0s for pod "pod-4ea5f56d-896d-412b-978e-cbad80e8fa16" in namespace "emptydir-6482" to be "Succeeded or Failed" May 16 00:13:37.437: INFO: Pod "pod-4ea5f56d-896d-412b-978e-cbad80e8fa16": Phase="Pending", Reason="", readiness=false. Elapsed: 14.088009ms May 16 00:13:39.442: INFO: Pod "pod-4ea5f56d-896d-412b-978e-cbad80e8fa16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01862018s May 16 00:13:41.446: INFO: Pod "pod-4ea5f56d-896d-412b-978e-cbad80e8fa16": Phase="Running", Reason="", readiness=true. Elapsed: 4.02282662s May 16 00:13:43.450: INFO: Pod "pod-4ea5f56d-896d-412b-978e-cbad80e8fa16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026644758s STEP: Saw pod success May 16 00:13:43.450: INFO: Pod "pod-4ea5f56d-896d-412b-978e-cbad80e8fa16" satisfied condition "Succeeded or Failed" May 16 00:13:43.452: INFO: Trying to get logs from node latest-worker pod pod-4ea5f56d-896d-412b-978e-cbad80e8fa16 container test-container: STEP: delete the pod May 16 00:13:43.480: INFO: Waiting for pod pod-4ea5f56d-896d-412b-978e-cbad80e8fa16 to disappear May 16 00:13:43.509: INFO: Pod pod-4ea5f56d-896d-412b-978e-cbad80e8fa16 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:13:43.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6482" for this suite. 
• [SLOW TEST:6.220 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":103,"skipped":1668,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:13:43.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 16 00:13:43.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1536' May 16 00:13:43.678: INFO: stderr: "" May 16 00:13:43.678: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 16 00:13:48.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1536 -o json' May 16 00:13:48.830: INFO: stderr: "" May 16 00:13:48.830: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-16T00:13:43Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-16T00:13:43Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n 
},\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.115\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-16T00:13:47Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1536\",\n \"resourceVersion\": \"5007610\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1536/pods/e2e-test-httpd-pod\",\n \"uid\": \"3bada3f8-276b-404f-b58f-f5ec178e1ae3\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-2pxz5\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-2pxz5\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-2pxz5\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-16T00:13:43Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-16T00:13:47Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-16T00:13:47Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-16T00:13:43Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://fa84ca4fcd71ba035b0780f9c1d835d779d9dcd34a6d1bc8a2195f20818ffe4b\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-16T00:13:46Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.115\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.115\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-16T00:13:43Z\"\n }\n}\n" STEP: replace the image in the pod May 16 
00:13:48.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1536' May 16 00:13:49.194: INFO: stderr: "" May 16 00:13:49.194: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 16 00:13:49.199: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1536' May 16 00:13:53.535: INFO: stderr: "" May 16 00:13:53.535: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:13:53.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1536" for this suite. • [SLOW TEST:10.033 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":104,"skipped":1669,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:13:53.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
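The pod created in the step that follows attaches a postStart exec hook to its container. Sketched below; on this 1.18-era API the handler type is corev1.Handler (later renamed LifecycleHandler), and the hook command, which in the real test calls back to the handler pod created above, is an assumption with a placeholder address:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        c := corev1.Container{
            Name:  "pod-with-poststart-exec-hook",
            Image: "docker.io/library/busybox:1.29",
            Lifecycle: &corev1.Lifecycle{
                // Runs inside the container right after it starts; if the
                // hook fails, the container is killed and restarted per
                // the pod's restart policy.
                PostStart: &corev1.Handler{
                    Exec: &corev1.ExecAction{
                        Command: []string{"sh", "-c",
                            "wget -qO- http://HANDLER_POD_IP:8080/echo?msg=poststart"},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(out))
    }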
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 16 00:14:01.730: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 00:14:01.746: INFO: Pod pod-with-poststart-exec-hook still exists May 16 00:14:03.747: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 00:14:03.792: INFO: Pod pod-with-poststart-exec-hook still exists May 16 00:14:05.747: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 00:14:05.751: INFO: Pod pod-with-poststart-exec-hook still exists May 16 00:14:07.747: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 00:14:07.750: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:14:07.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-680" for this suite. • [SLOW TEST:14.209 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":105,"skipped":1677,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:14:07.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:14:07.825: INFO: Creating ReplicaSet my-hostname-basic-d2b825a6-460f-485f-9f33-2e14dd39d460 May 16 00:14:07.858: INFO: Pod name my-hostname-basic-d2b825a6-460f-485f-9f33-2e14dd39d460: Found 0 pods out of 1 May 16 00:14:12.863: INFO: Pod name my-hostname-basic-d2b825a6-460f-485f-9f33-2e14dd39d460: Found 1 pods out of 1 May 16 00:14:12.863: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d2b825a6-460f-485f-9f33-2e14dd39d460" is running May 16 00:14:12.869: INFO: Pod "my-hostname-basic-d2b825a6-460f-485f-9f33-2e14dd39d460-zwgzt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 00:14:07 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 00:14:10 +0000 UTC 
Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 00:14:10 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 00:14:07 +0000 UTC Reason: Message:}]) May 16 00:14:12.870: INFO: Trying to dial the pod May 16 00:14:17.883: INFO: Controller my-hostname-basic-d2b825a6-460f-485f-9f33-2e14dd39d460: Got expected result from replica 1 [my-hostname-basic-d2b825a6-460f-485f-9f33-2e14dd39d460-zwgzt]: "my-hostname-basic-d2b825a6-460f-485f-9f33-2e14dd39d460-zwgzt", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:14:17.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5044" for this suite. • [SLOW TEST:10.134 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":106,"skipped":1678,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:14:17.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:14:18.020: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 16 00:14:18.042: INFO: Number of nodes with available pods: 0 May 16 00:14:18.042: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
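The DaemonSet under test pins its pods via a nodeSelector in the pod template, so relabeling a node into or out of the selector is what drives the launch/unschedule cycles polled below. A sketch of such a DaemonSet (the label keys, image, and selector value "blue" follow the STEP wording above, but are assumptions; the test later switches the selector to green and the update strategy to RollingUpdate):

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        labels := map[string]string{"daemonset-name": "daemon-set"}
        ds := appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        // Only nodes labeled color=blue run a daemon pod;
                        // initially no node has the label, hence
                        // "daemon pods should not be running on any nodes".
                        NodeSelector: map[string]string{"color": "blue"},
                        Containers: []corev1.Container{
                            {Name: "app", Image: "docker.io/library/httpd:2.4.38-alpine"},
                        },
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(ds, "", "  ")
        fmt.Println(string(out))
    }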
May 16 00:14:18.183: INFO: Number of nodes with available pods: 0 May 16 00:14:18.183: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:19.195: INFO: Number of nodes with available pods: 0 May 16 00:14:19.195: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:20.187: INFO: Number of nodes with available pods: 0 May 16 00:14:20.187: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:21.200: INFO: Number of nodes with available pods: 1 May 16 00:14:21.200: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 16 00:14:21.226: INFO: Number of nodes with available pods: 1 May 16 00:14:21.226: INFO: Number of running nodes: 0, number of available pods: 1 May 16 00:14:22.230: INFO: Number of nodes with available pods: 0 May 16 00:14:22.230: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 16 00:14:22.285: INFO: Number of nodes with available pods: 0 May 16 00:14:22.285: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:23.347: INFO: Number of nodes with available pods: 0 May 16 00:14:23.347: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:24.289: INFO: Number of nodes with available pods: 0 May 16 00:14:24.289: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:25.289: INFO: Number of nodes with available pods: 0 May 16 00:14:25.289: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:26.288: INFO: Number of nodes with available pods: 0 May 16 00:14:26.288: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:27.290: INFO: Number of nodes with available pods: 0 May 16 00:14:27.290: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:28.289: INFO: Number of nodes with available pods: 0 May 16 00:14:28.289: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:29.290: INFO: Number of nodes with available pods: 0 May 16 00:14:29.290: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:30.290: INFO: Number of nodes with available pods: 0 May 16 00:14:30.290: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:31.289: INFO: Number of nodes with available pods: 0 May 16 00:14:31.289: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:32.289: INFO: Number of nodes with available pods: 0 May 16 00:14:32.289: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:33.289: INFO: Number of nodes with available pods: 0 May 16 00:14:33.289: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:34.288: INFO: Number of nodes with available pods: 0 May 16 00:14:34.288: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:35.313: INFO: Number of nodes with available pods: 0 May 16 00:14:35.313: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:36.290: INFO: Number of nodes with available pods: 0 May 16 00:14:36.290: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:37.290: INFO: Number of nodes with available pods: 0 May 16 00:14:37.290: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:14:38.290: INFO: Number of nodes with available pods: 1 May 16 00:14:38.290: INFO: Number of 
running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4360, will wait for the garbage collector to delete the pods May 16 00:14:38.356: INFO: Deleting DaemonSet.extensions daemon-set took: 6.377247ms May 16 00:14:38.656: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.250938ms May 16 00:14:45.359: INFO: Number of nodes with available pods: 0 May 16 00:14:45.359: INFO: Number of running nodes: 0, number of available pods: 0 May 16 00:14:45.363: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4360/daemonsets","resourceVersion":"5007979"},"items":null} May 16 00:14:45.365: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4360/pods","resourceVersion":"5007979"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:14:45.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4360" for this suite. • [SLOW TEST:27.532 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":107,"skipped":1699,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:14:45.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 16 00:14:45.482: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3fe2396b-c0fe-4be2-ad3d-607012dc68ae" in namespace "downward-api-6889" to be "Succeeded or Failed" May 16 00:14:45.495: INFO: Pod "downwardapi-volume-3fe2396b-c0fe-4be2-ad3d-607012dc68ae": Phase="Pending", Reason="", readiness=false. Elapsed: 13.467083ms May 16 00:14:48.159: INFO: Pod "downwardapi-volume-3fe2396b-c0fe-4be2-ad3d-607012dc68ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.676808183s May 16 00:14:50.162: INFO: Pod "downwardapi-volume-3fe2396b-c0fe-4be2-ad3d-607012dc68ae": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.680594455s STEP: Saw pod success May 16 00:14:50.162: INFO: Pod "downwardapi-volume-3fe2396b-c0fe-4be2-ad3d-607012dc68ae" satisfied condition "Succeeded or Failed" May 16 00:14:50.165: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-3fe2396b-c0fe-4be2-ad3d-607012dc68ae container client-container: STEP: delete the pod May 16 00:14:50.326: INFO: Waiting for pod downwardapi-volume-3fe2396b-c0fe-4be2-ad3d-607012dc68ae to disappear May 16 00:14:50.466: INFO: Pod downwardapi-volume-3fe2396b-c0fe-4be2-ad3d-607012dc68ae no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:14:50.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6889" for this suite. • [SLOW TEST:5.057 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":108,"skipped":1705,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:14:50.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-319 STEP: creating a selector STEP: Creating the service pods in kubernetes May 16 00:14:50.533: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 16 00:14:50.595: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 16 00:14:52.672: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 16 00:14:54.600: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 16 00:14:56.599: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 00:14:58.599: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 00:15:00.600: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 00:15:02.599: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 00:15:04.600: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 00:15:06.600: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 00:15:08.599: INFO: The status of Pod netserver-0 is Running (Ready = true) May 16 00:15:08.605: INFO: The status of Pod netserver-1 is Running (Ready = false) May 16 00:15:10.608: INFO: The status of Pod 
netserver-1 is Running (Ready = false) May 16 00:15:12.608: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 16 00:15:18.664: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.117 8081 | grep -v '^\s*$'] Namespace:pod-network-test-319 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:15:18.664: INFO: >>> kubeConfig: /root/.kube/config I0516 00:15:18.699845 7 log.go:172] (0xc00256e420) (0xc0020d9a40) Create stream I0516 00:15:18.699879 7 log.go:172] (0xc00256e420) (0xc0020d9a40) Stream added, broadcasting: 1 I0516 00:15:18.702380 7 log.go:172] (0xc00256e420) Reply frame received for 1 I0516 00:15:18.702411 7 log.go:172] (0xc00256e420) (0xc001e97c20) Create stream I0516 00:15:18.702422 7 log.go:172] (0xc00256e420) (0xc001e97c20) Stream added, broadcasting: 3 I0516 00:15:18.703244 7 log.go:172] (0xc00256e420) Reply frame received for 3 I0516 00:15:18.703292 7 log.go:172] (0xc00256e420) (0xc0020d9ae0) Create stream I0516 00:15:18.703334 7 log.go:172] (0xc00256e420) (0xc0020d9ae0) Stream added, broadcasting: 5 I0516 00:15:18.704234 7 log.go:172] (0xc00256e420) Reply frame received for 5 I0516 00:15:19.793398 7 log.go:172] (0xc00256e420) Data frame received for 3 I0516 00:15:19.793652 7 log.go:172] (0xc001e97c20) (3) Data frame handling I0516 00:15:19.793723 7 log.go:172] (0xc00256e420) Data frame received for 5 I0516 00:15:19.793753 7 log.go:172] (0xc0020d9ae0) (5) Data frame handling I0516 00:15:19.793780 7 log.go:172] (0xc001e97c20) (3) Data frame sent I0516 00:15:19.793796 7 log.go:172] (0xc00256e420) Data frame received for 3 I0516 00:15:19.793816 7 log.go:172] (0xc001e97c20) (3) Data frame handling I0516 00:15:19.795833 7 log.go:172] (0xc00256e420) Data frame received for 1 I0516 00:15:19.795855 7 log.go:172] (0xc0020d9a40) (1) Data frame handling I0516 00:15:19.795871 7 log.go:172] (0xc0020d9a40) (1) Data frame sent I0516 00:15:19.795891 7 log.go:172] (0xc00256e420) (0xc0020d9a40) Stream removed, broadcasting: 1 I0516 00:15:19.795912 7 log.go:172] (0xc00256e420) Go away received I0516 00:15:19.796057 7 log.go:172] (0xc00256e420) (0xc0020d9a40) Stream removed, broadcasting: 1 I0516 00:15:19.796087 7 log.go:172] (0xc00256e420) (0xc001e97c20) Stream removed, broadcasting: 3 I0516 00:15:19.796101 7 log.go:172] (0xc00256e420) (0xc0020d9ae0) Stream removed, broadcasting: 5 May 16 00:15:19.796: INFO: Found all expected endpoints: [netserver-0] May 16 00:15:19.807: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.157 8081 | grep -v '^\s*$'] Namespace:pod-network-test-319 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:15:19.807: INFO: >>> kubeConfig: /root/.kube/config I0516 00:15:19.836820 7 log.go:172] (0xc0028506e0) (0xc00180e000) Create stream I0516 00:15:19.836846 7 log.go:172] (0xc0028506e0) (0xc00180e000) Stream added, broadcasting: 1 I0516 00:15:19.838627 7 log.go:172] (0xc0028506e0) Reply frame received for 1 I0516 00:15:19.838651 7 log.go:172] (0xc0028506e0) (0xc0020d9b80) Create stream I0516 00:15:19.838665 7 log.go:172] (0xc0028506e0) (0xc0020d9b80) Stream added, broadcasting: 3 I0516 00:15:19.839557 7 log.go:172] (0xc0028506e0) Reply frame received for 3 I0516 00:15:19.839585 7 log.go:172] (0xc0028506e0) (0xc001b3c000) Create stream I0516 00:15:19.839597 7 log.go:172] (0xc0028506e0) (0xc001b3c000) Stream added, 
broadcasting: 5 I0516 00:15:19.840397 7 log.go:172] (0xc0028506e0) Reply frame received for 5 I0516 00:15:20.918185 7 log.go:172] (0xc0028506e0) Data frame received for 3 I0516 00:15:20.918229 7 log.go:172] (0xc0020d9b80) (3) Data frame handling I0516 00:15:20.918252 7 log.go:172] (0xc0020d9b80) (3) Data frame sent I0516 00:15:20.918565 7 log.go:172] (0xc0028506e0) Data frame received for 3 I0516 00:15:20.918587 7 log.go:172] (0xc0020d9b80) (3) Data frame handling I0516 00:15:20.918654 7 log.go:172] (0xc0028506e0) Data frame received for 5 I0516 00:15:20.918685 7 log.go:172] (0xc001b3c000) (5) Data frame handling I0516 00:15:20.921730 7 log.go:172] (0xc0028506e0) Data frame received for 1 I0516 00:15:20.921822 7 log.go:172] (0xc00180e000) (1) Data frame handling I0516 00:15:20.921902 7 log.go:172] (0xc00180e000) (1) Data frame sent I0516 00:15:20.921944 7 log.go:172] (0xc0028506e0) (0xc00180e000) Stream removed, broadcasting: 1 I0516 00:15:20.921976 7 log.go:172] (0xc0028506e0) Go away received I0516 00:15:20.922150 7 log.go:172] (0xc0028506e0) (0xc00180e000) Stream removed, broadcasting: 1 I0516 00:15:20.922200 7 log.go:172] (0xc0028506e0) (0xc0020d9b80) Stream removed, broadcasting: 3 I0516 00:15:20.922269 7 log.go:172] (0xc0028506e0) (0xc001b3c000) Stream removed, broadcasting: 5 May 16 00:15:20.922: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:15:20.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-319" for this suite. • [SLOW TEST:30.448 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":109,"skipped":1717,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:15:20.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-6219 STEP: Waiting for pods to come up. 
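------------------------------
For reference: the behavior this test exercises is the pod lifecycle preStop hook, which the kubelet runs before the container receives SIGTERM on deletion. A minimal client-go sketch of a pod carrying such a hook (pod name, image, and commands are illustrative, not the suite's actual server/tester pods; the Handler type matches the corev1 API of this 1.18-era cluster):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs before the container is sent SIGTERM on deletion.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo prestop"},
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Deleting such a pod is what triggers the hook; in the suite, the tester pod's hook reports back to the server pod, which is the {"prestop": 1} count seen below.
------------------------------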
STEP: Creating tester pod tester in namespace prestop-6219 STEP: Deleting pre-stop pod May 16 00:15:36.148: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:15:36.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6219" for this suite. • [SLOW TEST:15.281 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":110,"skipped":1724,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:15:36.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 16 00:15:40.944: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:15:40.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7702" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":111,"skipped":1730,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:15:40.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:15:57.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9218" for this suite. • [SLOW TEST:16.449 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":288,"completed":112,"skipped":1783,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:15:57.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:15:57.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-5382" for this suite. 
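------------------------------
The Lease test above asserts only that the coordination.k8s.io/v1 API is usable end to end. A sketch of the same create-and-read round trip with client-go (namespace, lease name, and holder identity are invented for illustration):

package main

import (
	"context"
	"fmt"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	holder := "demo-holder"
	var seconds int32 = 30
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-lease"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &seconds,
		},
	}
	// Create, then read the lease back.
	if _, err := cs.CoordinationV1().Leases("default").Create(context.TODO(), lease, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	got, err := cs.CoordinationV1().Leases("default").Get(context.TODO(), "demo-lease", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("holder:", *got.Spec.HolderIdentity)
}

The test only needs the round trip to succeed; Leases are otherwise the primitive that node heartbeats and leader election are built on.
------------------------------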
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":113,"skipped":1796,"failed":0} ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:15:57.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-e1051e2c-5bd1-4e94-b9f7-45fd26cada7b STEP: Creating secret with name s-test-opt-upd-1e1eecd8-1ed1-4281-979c-389b6dc81484 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e1051e2c-5bd1-4e94-b9f7-45fd26cada7b STEP: Updating secret s-test-opt-upd-1e1eecd8-1ed1-4281-979c-389b6dc81484 STEP: Creating secret with name s-test-opt-create-5b60bb10-c62b-4ffb-9cae-bf032d69c081 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:17:16.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1910" for this suite. • [SLOW TEST:78.813 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":114,"skipped":1796,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:17:16.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:17:16.858: INFO: The status of Pod test-webserver-736a4725-2e3a-4d27-a0a9-0a22a60e3a59 is Pending, waiting for it to be Running (with Ready = true) May 16 00:17:18.952: INFO: The status of Pod test-webserver-736a4725-2e3a-4d27-a0a9-0a22a60e3a59 is Pending, waiting for it to be Running (with Ready = 
true) May 16 00:17:20.862: INFO: The status of Pod test-webserver-736a4725-2e3a-4d27-a0a9-0a22a60e3a59 is Running (Ready = false) May 16 00:17:22.862: INFO: The status of Pod test-webserver-736a4725-2e3a-4d27-a0a9-0a22a60e3a59 is Running (Ready = false) May 16 00:17:24.862: INFO: The status of Pod test-webserver-736a4725-2e3a-4d27-a0a9-0a22a60e3a59 is Running (Ready = false) May 16 00:17:26.862: INFO: The status of Pod test-webserver-736a4725-2e3a-4d27-a0a9-0a22a60e3a59 is Running (Ready = false) May 16 00:17:28.863: INFO: The status of Pod test-webserver-736a4725-2e3a-4d27-a0a9-0a22a60e3a59 is Running (Ready = false) May 16 00:17:30.862: INFO: The status of Pod test-webserver-736a4725-2e3a-4d27-a0a9-0a22a60e3a59 is Running (Ready = false) May 16 00:17:32.862: INFO: The status of Pod test-webserver-736a4725-2e3a-4d27-a0a9-0a22a60e3a59 is Running (Ready = false) May 16 00:17:34.861: INFO: The status of Pod test-webserver-736a4725-2e3a-4d27-a0a9-0a22a60e3a59 is Running (Ready = false) May 16 00:17:36.895: INFO: The status of Pod test-webserver-736a4725-2e3a-4d27-a0a9-0a22a60e3a59 is Running (Ready = false) May 16 00:17:38.862: INFO: The status of Pod test-webserver-736a4725-2e3a-4d27-a0a9-0a22a60e3a59 is Running (Ready = false) May 16 00:17:40.863: INFO: The status of Pod test-webserver-736a4725-2e3a-4d27-a0a9-0a22a60e3a59 is Running (Ready = true) May 16 00:17:40.866: INFO: Container started at 2020-05-16 00:17:19 +0000 UTC, pod became ready at 2020-05-16 00:17:39 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:17:40.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9148" for this suite. 
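------------------------------
What the polling above demonstrates: the container starts quickly, but the pod is not marked Ready until the readiness probe's initial delay has elapsed, and it never restarts. A sketch of the kind of spec under test (port, path, and timings are illustrative; the embedded Handler field is the field name in this 1.18-era corev1 API, renamed ProbeHandler in later releases):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// webserverPod returns a pod whose readiness gates on an HTTP probe
// that only starts firing after an initial delay.
func webserverPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "nginx",
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
					},
					// Until this delay passes the kubelet does not probe,
					// so the pod stays Running but Ready=false, which is
					// the window the log above polls through.
					InitialDelaySeconds: 20,
					PeriodSeconds:       5,
				},
			}},
		},
	}
}

func main() { fmt.Println(webserverPod().Name) }
------------------------------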
• [SLOW TEST:24.144 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":115,"skipped":1816,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:17:40.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:17:40.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2908" for this suite. 
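------------------------------
The QOS check above passes because the pod's requests equal its limits for both cpu and memory, which the API server classifies as Guaranteed in status.qosClass. A sketch with illustrative values:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox",
				// Identical requests and limits for cpu and memory
				// => status.qosClass "Guaranteed" once admitted.
				Resources: corev1.ResourceRequirements{
					Requests: res,
					Limits:   res,
				},
			}},
		},
	}
	fmt.Println(pod.Name, "expects qosClass=Guaranteed")
}

Omitting requests gives the same class (they default to the limits); mixed or missing values yield Burstable or BestEffort instead.
------------------------------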
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":116,"skipped":1859,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:17:41.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 00:17:41.571: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 00:17:43.587: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185061, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185061, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185061, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185061, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:17:45.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185061, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185061, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185061, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185061, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 00:17:48.805: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be 
able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:17:49.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5382" for this suite. STEP: Destroying namespace "webhook-5382-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.376 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":117,"skipped":1876,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:17:49.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 16 00:17:49.482: INFO: Waiting up to 5m0s for pod "client-containers-ad5ea5e6-1637-4aa5-8790-14280f185fb0" in namespace "containers-9893" to be "Succeeded or Failed" May 16 00:17:49.500: INFO: Pod "client-containers-ad5ea5e6-1637-4aa5-8790-14280f185fb0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.427395ms May 16 00:17:51.503: INFO: Pod "client-containers-ad5ea5e6-1637-4aa5-8790-14280f185fb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021156028s May 16 00:17:53.507: INFO: Pod "client-containers-ad5ea5e6-1637-4aa5-8790-14280f185fb0": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.024568058s May 16 00:17:55.511: INFO: Pod "client-containers-ad5ea5e6-1637-4aa5-8790-14280f185fb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028876893s STEP: Saw pod success May 16 00:17:55.511: INFO: Pod "client-containers-ad5ea5e6-1637-4aa5-8790-14280f185fb0" satisfied condition "Succeeded or Failed" May 16 00:17:55.514: INFO: Trying to get logs from node latest-worker pod client-containers-ad5ea5e6-1637-4aa5-8790-14280f185fb0 container test-container: STEP: delete the pod May 16 00:17:55.559: INFO: Waiting for pod client-containers-ad5ea5e6-1637-4aa5-8790-14280f185fb0 to disappear May 16 00:17:55.572: INFO: Pod client-containers-ad5ea5e6-1637-4aa5-8790-14280f185fb0 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:17:55.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9893" for this suite. • [SLOW TEST:6.223 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":118,"skipped":1890,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:17:55.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 00:17:56.256: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 00:17:58.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185076, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185076, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185076, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185076, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:18:00.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185076, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185076, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185076, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185076, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:18:02.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185076, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185076, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185076, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185076, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 00:18:05.299: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:18:05.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4607-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:18:06.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6337" for this suite. STEP: Destroying namespace "webhook-6337-markers" for this suite. 
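------------------------------
For orientation, registering a mutating webhook for a custom resource, as the step above does, goes through the admissionregistration.k8s.io/v1 API. A sketch (webhook name, group/resource, service reference, and CA bundle are placeholders, not the suite's generated ones):

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/mutate"
	hook := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-cr-mutator"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "demo.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default",
					Name:      "e2e-test-webhook", // placeholder service
					Path:      &path,
				},
				CABundle: []byte("<PEM CA bundle>"), // placeholder
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"example.com"},
					APIVersions: []string{"v1"},
					Resources:   []string{"foos"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	if _, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(context.TODO(), hook, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The suite then verifies the admission chain by creating a custom resource and checking the mutation; the registration shape is the part sketched here.
------------------------------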
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.935 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":119,"skipped":1902,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:18:06.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 16 00:18:11.722: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:18:11.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7774" for this suite. • [SLOW TEST:5.240 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":120,"skipped":1916,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:18:11.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:18:28.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4823" for this suite. • [SLOW TEST:17.118 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":288,"completed":121,"skipped":1920,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:18:28.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:18:29.023: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:18:30.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2907" for this suite. 
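------------------------------
The creating/deleting test above talks to the apiextensions API directly. A compact sketch of the same round trip (group, kind, and schema are invented; the v1 API requires a schema on every version):

package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
	crds := cs.ApiextensionsV1().CustomResourceDefinitions()
	if _, err := crds.Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Deleting the definition also removes all custom objects of that kind.
	if err := crds.Delete(context.TODO(), crd.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
------------------------------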
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":122,"skipped":1935,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:18:30.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 16 00:18:30.286: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix230471312/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:18:30.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2362" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":123,"skipped":1942,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:18:30.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:18:30.533: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:18:31.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3715" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":124,"skipped":1953,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:18:31.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3567 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3567 STEP: creating replication controller externalsvc in namespace services-3567 I0516 00:18:32.183394 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3567, replica count: 2 I0516 00:18:35.233785 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 00:18:38.234005 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 16 00:18:38.319: INFO: Creating new exec pod May 16 00:18:42.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3567 execpodpvz64 -- /bin/sh -x -c nslookup clusterip-service' May 16 00:18:42.663: INFO: stderr: "I0516 00:18:42.561868 2080 log.go:172] (0xc000996e70) (0xc000a84780) Create stream\nI0516 00:18:42.561914 2080 log.go:172] (0xc000996e70) (0xc000a84780) Stream added, broadcasting: 1\nI0516 00:18:42.564855 2080 log.go:172] (0xc000996e70) Reply frame received for 1\nI0516 00:18:42.564886 2080 log.go:172] (0xc000996e70) (0xc00071e6e0) Create stream\nI0516 00:18:42.564900 2080 log.go:172] (0xc000996e70) (0xc00071e6e0) Stream added, broadcasting: 3\nI0516 00:18:42.565792 2080 log.go:172] (0xc000996e70) Reply frame received for 3\nI0516 00:18:42.565830 2080 log.go:172] (0xc000996e70) (0xc0006dcdc0) Create stream\nI0516 00:18:42.565840 2080 log.go:172] (0xc000996e70) (0xc0006dcdc0) Stream added, broadcasting: 5\nI0516 00:18:42.566738 2080 log.go:172] (0xc000996e70) Reply frame received for 5\nI0516 00:18:42.647889 2080 log.go:172] (0xc000996e70) Data frame received for 5\nI0516 00:18:42.647912 2080 log.go:172] (0xc0006dcdc0) (5) Data frame handling\nI0516 00:18:42.647930 2080 log.go:172] (0xc0006dcdc0) (5) Data frame sent\n+ nslookup clusterip-service\nI0516 00:18:42.656482 2080 log.go:172] (0xc000996e70) Data frame received for 3\nI0516 00:18:42.656517 2080 log.go:172] (0xc00071e6e0) (3) Data frame 
handling\nI0516 00:18:42.656534 2080 log.go:172] (0xc00071e6e0) (3) Data frame sent\nI0516 00:18:42.657104 2080 log.go:172] (0xc000996e70) Data frame received for 3\nI0516 00:18:42.657198 2080 log.go:172] (0xc00071e6e0) (3) Data frame handling\nI0516 00:18:42.657206 2080 log.go:172] (0xc00071e6e0) (3) Data frame sent\nI0516 00:18:42.657568 2080 log.go:172] (0xc000996e70) Data frame received for 3\nI0516 00:18:42.657584 2080 log.go:172] (0xc00071e6e0) (3) Data frame handling\nI0516 00:18:42.657807 2080 log.go:172] (0xc000996e70) Data frame received for 5\nI0516 00:18:42.657822 2080 log.go:172] (0xc0006dcdc0) (5) Data frame handling\nI0516 00:18:42.659211 2080 log.go:172] (0xc000996e70) Data frame received for 1\nI0516 00:18:42.659226 2080 log.go:172] (0xc000a84780) (1) Data frame handling\nI0516 00:18:42.659240 2080 log.go:172] (0xc000a84780) (1) Data frame sent\nI0516 00:18:42.659250 2080 log.go:172] (0xc000996e70) (0xc000a84780) Stream removed, broadcasting: 1\nI0516 00:18:42.659344 2080 log.go:172] (0xc000996e70) Go away received\nI0516 00:18:42.659521 2080 log.go:172] (0xc000996e70) (0xc000a84780) Stream removed, broadcasting: 1\nI0516 00:18:42.659536 2080 log.go:172] (0xc000996e70) (0xc00071e6e0) Stream removed, broadcasting: 3\nI0516 00:18:42.659542 2080 log.go:172] (0xc000996e70) (0xc0006dcdc0) Stream removed, broadcasting: 5\n" May 16 00:18:42.663: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3567.svc.cluster.local\tcanonical name = externalsvc.services-3567.svc.cluster.local.\nName:\texternalsvc.services-3567.svc.cluster.local\nAddress: 10.111.60.56\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3567, will wait for the garbage collector to delete the pods May 16 00:18:42.721: INFO: Deleting ReplicationController externalsvc took: 5.072034ms May 16 00:18:43.121: INFO: Terminating ReplicationController externalsvc pods took: 400.430057ms May 16 00:18:55.043: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:18:55.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3567" for this suite. 
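------------------------------
The type change exercised above is an ordinary Service update: the allocated ClusterIP is cleared and an externalName target is set, after which cluster DNS answers lookups for the service with a CNAME, as the nslookup output shows. A sketch using the names from the log (the update call and field handling are a plausible reconstruction, not the suite's exact helper):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svcs := cs.CoreV1().Services("services-3567")
	svc, err := svcs.Get(context.TODO(), "clusterip-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Switch to ExternalName: drop the allocated ClusterIP and point
	// DNS at another hostname instead. Ports are not used by
	// ExternalName services, so they are cleared here as well.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ClusterIP = ""
	svc.Spec.ExternalName = "externalsvc.services-3567.svc.cluster.local" // target from the log
	svc.Spec.Ports = nil
	if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------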
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:23.373 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":125,"skipped":1966,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:18:55.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 16 00:18:55.440: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:19:11.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5576" for this suite. 
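------------------------------
Marking a version "not served", as the step above does, is an update that flips Served to false on one entry of the CRD's version list; the published OpenAPI spec then drops that version's definitions. Sketch, assuming a hypothetical multi-version CRD named foos.example.com with versions v1 and v2:

package main

import (
	"context"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	crds := cs.ApiextensionsV1().CustomResourceDefinitions()
	crd, err := crds.Get(context.TODO(), "foos.example.com", metav1.GetOptions{}) // hypothetical CRD
	if err != nil {
		panic(err)
	}
	for i := range crd.Spec.Versions {
		// Stop serving one version; at least one other version must
		// remain served for the update to validate.
		if crd.Spec.Versions[i].Name == "v2" {
			crd.Spec.Versions[i].Served = false
		}
	}
	if _, err := crds.Update(context.TODO(), crd, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------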
• [SLOW TEST:15.692 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":126,"skipped":1971,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:19:11.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 16 00:19:11.114: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c6bed38-8f42-49d6-811f-eb441024be51" in namespace "projected-9710" to be "Succeeded or Failed" May 16 00:19:11.124: INFO: Pod "downwardapi-volume-4c6bed38-8f42-49d6-811f-eb441024be51": Phase="Pending", Reason="", readiness=false. Elapsed: 10.008615ms May 16 00:19:13.194: INFO: Pod "downwardapi-volume-4c6bed38-8f42-49d6-811f-eb441024be51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080194642s May 16 00:19:15.219: INFO: Pod "downwardapi-volume-4c6bed38-8f42-49d6-811f-eb441024be51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105271136s STEP: Saw pod success May 16 00:19:15.219: INFO: Pod "downwardapi-volume-4c6bed38-8f42-49d6-811f-eb441024be51" satisfied condition "Succeeded or Failed" May 16 00:19:15.221: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4c6bed38-8f42-49d6-811f-eb441024be51 container client-container: STEP: delete the pod May 16 00:19:15.395: INFO: Waiting for pod downwardapi-volume-4c6bed38-8f42-49d6-811f-eb441024be51 to disappear May 16 00:19:15.438: INFO: Pod downwardapi-volume-4c6bed38-8f42-49d6-811f-eb441024be51 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:19:15.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9710" for this suite. 
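------------------------------
The projected downwardAPI test above mounts the pod's own metadata into files through a projected volume. A sketch of the volume wiring for exposing just the pod name (volume and file names are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podnameVolume builds a projected volume that exposes the pod's own
// name as a file; mounted at /etc/podinfo, the container can read it
// from /etc/podinfo/podname.
func podnameVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								FieldPath: "metadata.name",
							},
						}},
					},
				}},
			},
		},
	}
}

func main() {
	fmt.Println("projected volume:", podnameVolume().Name)
}
------------------------------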
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":127,"skipped":1990,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:19:15.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-5885 STEP: creating replication controller nodeport-test in namespace services-5885 I0516 00:19:15.812552 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-5885, replica count: 2 I0516 00:19:18.862927 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 00:19:21.863167 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 16 00:19:21.863: INFO: Creating new exec pod May 16 00:19:26.899: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5885 execpodj5bmq -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 16 00:19:27.144: INFO: stderr: "I0516 00:19:27.035583 2100 log.go:172] (0xc000936000) (0xc000381040) Create stream\nI0516 00:19:27.035641 2100 log.go:172] (0xc000936000) (0xc000381040) Stream added, broadcasting: 1\nI0516 00:19:27.037746 2100 log.go:172] (0xc000936000) Reply frame received for 1\nI0516 00:19:27.037777 2100 log.go:172] (0xc000936000) (0xc000449d60) Create stream\nI0516 00:19:27.037790 2100 log.go:172] (0xc000936000) (0xc000449d60) Stream added, broadcasting: 3\nI0516 00:19:27.038749 2100 log.go:172] (0xc000936000) Reply frame received for 3\nI0516 00:19:27.038790 2100 log.go:172] (0xc000936000) (0xc0000f3860) Create stream\nI0516 00:19:27.038803 2100 log.go:172] (0xc000936000) (0xc0000f3860) Stream added, broadcasting: 5\nI0516 00:19:27.039770 2100 log.go:172] (0xc000936000) Reply frame received for 5\nI0516 00:19:27.135777 2100 log.go:172] (0xc000936000) Data frame received for 5\nI0516 00:19:27.135804 2100 log.go:172] (0xc0000f3860) (5) Data frame handling\nI0516 00:19:27.135819 2100 log.go:172] (0xc0000f3860) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0516 00:19:27.136135 2100 log.go:172] (0xc000936000) Data frame received for 5\nI0516 00:19:27.136157 2100 log.go:172] (0xc0000f3860) (5) Data frame handling\nI0516 00:19:27.136169 2100 log.go:172] (0xc0000f3860) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0516 00:19:27.136463 2100 log.go:172] (0xc000936000) Data frame received for 3\nI0516 00:19:27.136479 2100 log.go:172] (0xc000449d60) (3) Data frame 
handling\nI0516 00:19:27.136650 2100 log.go:172] (0xc000936000) Data frame received for 5\nI0516 00:19:27.136663 2100 log.go:172] (0xc0000f3860) (5) Data frame handling\nI0516 00:19:27.138297 2100 log.go:172] (0xc000936000) Data frame received for 1\nI0516 00:19:27.138332 2100 log.go:172] (0xc000381040) (1) Data frame handling\nI0516 00:19:27.138350 2100 log.go:172] (0xc000381040) (1) Data frame sent\nI0516 00:19:27.138378 2100 log.go:172] (0xc000936000) (0xc000381040) Stream removed, broadcasting: 1\nI0516 00:19:27.138418 2100 log.go:172] (0xc000936000) Go away received\nI0516 00:19:27.138796 2100 log.go:172] (0xc000936000) (0xc000381040) Stream removed, broadcasting: 1\nI0516 00:19:27.138824 2100 log.go:172] (0xc000936000) (0xc000449d60) Stream removed, broadcasting: 3\nI0516 00:19:27.138843 2100 log.go:172] (0xc000936000) (0xc0000f3860) Stream removed, broadcasting: 5\n" May 16 00:19:27.144: INFO: stdout: "" May 16 00:19:27.145: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5885 execpodj5bmq -- /bin/sh -x -c nc -zv -t -w 2 10.103.164.103 80' May 16 00:19:27.362: INFO: stderr: "I0516 00:19:27.275785 2120 log.go:172] (0xc0009194a0) (0xc000bec6e0) Create stream\nI0516 00:19:27.275838 2120 log.go:172] (0xc0009194a0) (0xc000bec6e0) Stream added, broadcasting: 1\nI0516 00:19:27.279404 2120 log.go:172] (0xc0009194a0) Reply frame received for 1\nI0516 00:19:27.279439 2120 log.go:172] (0xc0009194a0) (0xc0004aadc0) Create stream\nI0516 00:19:27.279449 2120 log.go:172] (0xc0009194a0) (0xc0004aadc0) Stream added, broadcasting: 3\nI0516 00:19:27.280216 2120 log.go:172] (0xc0009194a0) Reply frame received for 3\nI0516 00:19:27.280260 2120 log.go:172] (0xc0009194a0) (0xc0003375e0) Create stream\nI0516 00:19:27.280275 2120 log.go:172] (0xc0009194a0) (0xc0003375e0) Stream added, broadcasting: 5\nI0516 00:19:27.281399 2120 log.go:172] (0xc0009194a0) Reply frame received for 5\nI0516 00:19:27.355275 2120 log.go:172] (0xc0009194a0) Data frame received for 3\nI0516 00:19:27.355314 2120 log.go:172] (0xc0004aadc0) (3) Data frame handling\nI0516 00:19:27.355333 2120 log.go:172] (0xc0009194a0) Data frame received for 5\nI0516 00:19:27.355339 2120 log.go:172] (0xc0003375e0) (5) Data frame handling\nI0516 00:19:27.355348 2120 log.go:172] (0xc0003375e0) (5) Data frame sent\nI0516 00:19:27.355354 2120 log.go:172] (0xc0009194a0) Data frame received for 5\nI0516 00:19:27.355372 2120 log.go:172] (0xc0003375e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.164.103 80\nConnection to 10.103.164.103 80 port [tcp/http] succeeded!\nI0516 00:19:27.356939 2120 log.go:172] (0xc0009194a0) Data frame received for 1\nI0516 00:19:27.357010 2120 log.go:172] (0xc000bec6e0) (1) Data frame handling\nI0516 00:19:27.357063 2120 log.go:172] (0xc000bec6e0) (1) Data frame sent\nI0516 00:19:27.357088 2120 log.go:172] (0xc0009194a0) (0xc000bec6e0) Stream removed, broadcasting: 1\nI0516 00:19:27.357104 2120 log.go:172] (0xc0009194a0) Go away received\nI0516 00:19:27.357638 2120 log.go:172] (0xc0009194a0) (0xc000bec6e0) Stream removed, broadcasting: 1\nI0516 00:19:27.357668 2120 log.go:172] (0xc0009194a0) (0xc0004aadc0) Stream removed, broadcasting: 3\nI0516 00:19:27.357678 2120 log.go:172] (0xc0009194a0) (0xc0003375e0) Stream removed, broadcasting: 5\n" May 16 00:19:27.362: INFO: stdout: "" May 16 00:19:27.363: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5885 
execpodj5bmq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30436' May 16 00:19:27.553: INFO: stderr: "I0516 00:19:27.491196 2142 log.go:172] (0xc000979290) (0xc000bca3c0) Create stream\nI0516 00:19:27.491240 2142 log.go:172] (0xc000979290) (0xc000bca3c0) Stream added, broadcasting: 1\nI0516 00:19:27.495659 2142 log.go:172] (0xc000979290) Reply frame received for 1\nI0516 00:19:27.495693 2142 log.go:172] (0xc000979290) (0xc0005f8320) Create stream\nI0516 00:19:27.495702 2142 log.go:172] (0xc000979290) (0xc0005f8320) Stream added, broadcasting: 3\nI0516 00:19:27.496434 2142 log.go:172] (0xc000979290) Reply frame received for 3\nI0516 00:19:27.496464 2142 log.go:172] (0xc000979290) (0xc000502e60) Create stream\nI0516 00:19:27.496473 2142 log.go:172] (0xc000979290) (0xc000502e60) Stream added, broadcasting: 5\nI0516 00:19:27.497101 2142 log.go:172] (0xc000979290) Reply frame received for 5\nI0516 00:19:27.546313 2142 log.go:172] (0xc000979290) Data frame received for 5\nI0516 00:19:27.546349 2142 log.go:172] (0xc000502e60) (5) Data frame handling\nI0516 00:19:27.546365 2142 log.go:172] (0xc000502e60) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 30436\nI0516 00:19:27.546722 2142 log.go:172] (0xc000979290) Data frame received for 5\nI0516 00:19:27.546738 2142 log.go:172] (0xc000502e60) (5) Data frame handling\nI0516 00:19:27.546749 2142 log.go:172] (0xc000502e60) (5) Data frame sent\nConnection to 172.17.0.13 30436 port [tcp/30436] succeeded!\nI0516 00:19:27.547027 2142 log.go:172] (0xc000979290) Data frame received for 5\nI0516 00:19:27.547036 2142 log.go:172] (0xc000502e60) (5) Data frame handling\nI0516 00:19:27.547055 2142 log.go:172] (0xc000979290) Data frame received for 3\nI0516 00:19:27.547076 2142 log.go:172] (0xc0005f8320) (3) Data frame handling\nI0516 00:19:27.548780 2142 log.go:172] (0xc000979290) Data frame received for 1\nI0516 00:19:27.548834 2142 log.go:172] (0xc000bca3c0) (1) Data frame handling\nI0516 00:19:27.548860 2142 log.go:172] (0xc000bca3c0) (1) Data frame sent\nI0516 00:19:27.548876 2142 log.go:172] (0xc000979290) (0xc000bca3c0) Stream removed, broadcasting: 1\nI0516 00:19:27.548890 2142 log.go:172] (0xc000979290) Go away received\nI0516 00:19:27.549346 2142 log.go:172] (0xc000979290) (0xc000bca3c0) Stream removed, broadcasting: 1\nI0516 00:19:27.549365 2142 log.go:172] (0xc000979290) (0xc0005f8320) Stream removed, broadcasting: 3\nI0516 00:19:27.549376 2142 log.go:172] (0xc000979290) (0xc000502e60) Stream removed, broadcasting: 5\n" May 16 00:19:27.553: INFO: stdout: "" May 16 00:19:27.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5885 execpodj5bmq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30436' May 16 00:19:27.768: INFO: stderr: "I0516 00:19:27.685366 2162 log.go:172] (0xc00003a6e0) (0xc00050c320) Create stream\nI0516 00:19:27.685426 2162 log.go:172] (0xc00003a6e0) (0xc00050c320) Stream added, broadcasting: 1\nI0516 00:19:27.688196 2162 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0516 00:19:27.688236 2162 log.go:172] (0xc00003a6e0) (0xc0004dcf00) Create stream\nI0516 00:19:27.688248 2162 log.go:172] (0xc00003a6e0) (0xc0004dcf00) Stream added, broadcasting: 3\nI0516 00:19:27.689087 2162 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0516 00:19:27.689235 2162 log.go:172] (0xc00003a6e0) (0xc00050d680) Create stream\nI0516 00:19:27.689247 2162 log.go:172] (0xc00003a6e0) (0xc00050d680) Stream added, broadcasting: 5\nI0516 00:19:27.690202 2162 log.go:172] 
(0xc00003a6e0) Reply frame received for 5\nI0516 00:19:27.761081 2162 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0516 00:19:27.761268 2162 log.go:172] (0xc0004dcf00) (3) Data frame handling\nI0516 00:19:27.761295 2162 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0516 00:19:27.761306 2162 log.go:172] (0xc00050d680) (5) Data frame handling\nI0516 00:19:27.761319 2162 log.go:172] (0xc00050d680) (5) Data frame sent\nI0516 00:19:27.761329 2162 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0516 00:19:27.761335 2162 log.go:172] (0xc00050d680) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30436\nConnection to 172.17.0.12 30436 port [tcp/30436] succeeded!\nI0516 00:19:27.762955 2162 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0516 00:19:27.762978 2162 log.go:172] (0xc00050c320) (1) Data frame handling\nI0516 00:19:27.762985 2162 log.go:172] (0xc00050c320) (1) Data frame sent\nI0516 00:19:27.762997 2162 log.go:172] (0xc00003a6e0) (0xc00050c320) Stream removed, broadcasting: 1\nI0516 00:19:27.763011 2162 log.go:172] (0xc00003a6e0) Go away received\nI0516 00:19:27.763563 2162 log.go:172] (0xc00003a6e0) (0xc00050c320) Stream removed, broadcasting: 1\nI0516 00:19:27.763580 2162 log.go:172] (0xc00003a6e0) (0xc0004dcf00) Stream removed, broadcasting: 3\nI0516 00:19:27.763587 2162 log.go:172] (0xc00003a6e0) (0xc00050d680) Stream removed, broadcasting: 5\n" May 16 00:19:27.768: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:19:27.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5885" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.329 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":128,"skipped":2009,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:19:27.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4410.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4410.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4410.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4410.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4410.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4410.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 16 00:19:36.284: INFO: DNS probes using dns-4410/dns-test-76afc56d-12a3-485d-b0ef-87ea5d5754fa succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:19:36.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4410" for this suite. • [SLOW TEST:8.796 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":129,"skipped":2038,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:19:36.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:21:37.174: INFO: Deleting pod "var-expansion-d510c3ce-d4c3-4439-af53-824480393860" in namespace "var-expansion-1215" May 16 00:21:37.178: INFO: Wait up to 5m0s for pod "var-expansion-d510c3ce-d4c3-4439-af53-824480393860" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:21:47.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1215" for this suite. 
• [SLOW TEST:130.706 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":130,"skipped":2047,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:21:47.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:21:47.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3506" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":131,"skipped":2075,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:21:47.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-08ace4a9-538f-4865-9852-db8986c7b4f5 STEP: Creating a pod to test consume secrets May 16 00:21:47.913: INFO: Waiting up to 5m0s for pod "pod-secrets-4cc821a5-5392-40c8-b4a9-48af53d6d104" in namespace "secrets-823" to be "Succeeded or Failed" May 16 00:21:47.948: INFO: Pod "pod-secrets-4cc821a5-5392-40c8-b4a9-48af53d6d104": Phase="Pending", Reason="", readiness=false. Elapsed: 34.361967ms May 16 00:21:50.010: INFO: Pod "pod-secrets-4cc821a5-5392-40c8-b4a9-48af53d6d104": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.096477094s May 16 00:21:52.016: INFO: Pod "pod-secrets-4cc821a5-5392-40c8-b4a9-48af53d6d104": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102502022s STEP: Saw pod success May 16 00:21:52.016: INFO: Pod "pod-secrets-4cc821a5-5392-40c8-b4a9-48af53d6d104" satisfied condition "Succeeded or Failed" May 16 00:21:52.020: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-4cc821a5-5392-40c8-b4a9-48af53d6d104 container secret-volume-test: STEP: delete the pod May 16 00:21:52.174: INFO: Waiting for pod pod-secrets-4cc821a5-5392-40c8-b4a9-48af53d6d104 to disappear May 16 00:21:52.191: INFO: Pod pod-secrets-4cc821a5-5392-40c8-b4a9-48af53d6d104 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:21:52.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-823" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":132,"skipped":2084,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:21:52.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 16 00:21:52.478: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51c7dfde-0724-4ea4-a362-608c63562fb6" in namespace "projected-1177" to be "Succeeded or Failed" May 16 00:21:52.485: INFO: Pod "downwardapi-volume-51c7dfde-0724-4ea4-a362-608c63562fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.379022ms May 16 00:21:54.563: INFO: Pod "downwardapi-volume-51c7dfde-0724-4ea4-a362-608c63562fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08462799s May 16 00:21:56.567: INFO: Pod "downwardapi-volume-51c7dfde-0724-4ea4-a362-608c63562fb6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.088972963s STEP: Saw pod success May 16 00:21:56.567: INFO: Pod "downwardapi-volume-51c7dfde-0724-4ea4-a362-608c63562fb6" satisfied condition "Succeeded or Failed" May 16 00:21:56.570: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-51c7dfde-0724-4ea4-a362-608c63562fb6 container client-container: STEP: delete the pod May 16 00:21:56.618: INFO: Waiting for pod downwardapi-volume-51c7dfde-0724-4ea4-a362-608c63562fb6 to disappear May 16 00:21:56.705: INFO: Pod downwardapi-volume-51c7dfde-0724-4ea4-a362-608c63562fb6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:21:56.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1177" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":133,"skipped":2093,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:21:56.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-36b080ce-4c1e-4c30-9ea8-002cecfa70f4 STEP: Creating a pod to test consume secrets May 16 00:21:56.957: INFO: Waiting up to 5m0s for pod "pod-secrets-fc3046f3-7398-48bf-93b1-441b7e27ffc7" in namespace "secrets-2106" to be "Succeeded or Failed" May 16 00:21:57.094: INFO: Pod "pod-secrets-fc3046f3-7398-48bf-93b1-441b7e27ffc7": Phase="Pending", Reason="", readiness=false. Elapsed: 137.1798ms May 16 00:21:59.118: INFO: Pod "pod-secrets-fc3046f3-7398-48bf-93b1-441b7e27ffc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160596703s May 16 00:22:01.379: INFO: Pod "pod-secrets-fc3046f3-7398-48bf-93b1-441b7e27ffc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.42228408s STEP: Saw pod success May 16 00:22:01.379: INFO: Pod "pod-secrets-fc3046f3-7398-48bf-93b1-441b7e27ffc7" satisfied condition "Succeeded or Failed" May 16 00:22:01.385: INFO: Trying to get logs from node latest-worker pod pod-secrets-fc3046f3-7398-48bf-93b1-441b7e27ffc7 container secret-volume-test: STEP: delete the pod May 16 00:22:01.464: INFO: Waiting for pod pod-secrets-fc3046f3-7398-48bf-93b1-441b7e27ffc7 to disappear May 16 00:22:01.555: INFO: Pod pod-secrets-fc3046f3-7398-48bf-93b1-441b7e27ffc7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:22:01.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2106" for this suite. 
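The "mappings" in these secret-volume specs refer to the items list of a secret volume, which projects a chosen key to a chosen path, optionally with a per-item mode (the "Item Mode" variant above). A minimal sketch, assuming a reachable cluster; all names are illustrative:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo                # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: data-1                # only this key is projected...
        path: new-path-data-1      # ...and it appears under the remapped path
        mode: 0400                 # per-item mode, overriding defaultMode
EOF
kubectl logs secret-demo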
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":134,"skipped":2186,"failed":0} ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:22:01.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 16 00:22:01.874: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:22:01.878: INFO: Number of nodes with available pods: 0 May 16 00:22:01.878: INFO: Node latest-worker is running more than one daemon pod May 16 00:22:02.883: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:22:02.886: INFO: Number of nodes with available pods: 0 May 16 00:22:02.886: INFO: Node latest-worker is running more than one daemon pod May 16 00:22:04.000: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:22:04.061: INFO: Number of nodes with available pods: 0 May 16 00:22:04.061: INFO: Node latest-worker is running more than one daemon pod May 16 00:22:04.891: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:22:04.894: INFO: Number of nodes with available pods: 0 May 16 00:22:04.894: INFO: Node latest-worker is running more than one daemon pod May 16 00:22:05.884: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:22:05.888: INFO: Number of nodes with available pods: 1 May 16 00:22:05.888: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:22:06.900: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:22:06.904: INFO: Number of nodes with available pods: 2 May 16 00:22:06.904: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 16 00:22:06.983: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:22:07.009: INFO: Number of nodes with available pods: 1 May 16 00:22:07.009: INFO: Node latest-worker is running more than one daemon pod May 16 00:22:08.015: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:22:08.019: INFO: Number of nodes with available pods: 1 May 16 00:22:08.019: INFO: Node latest-worker is running more than one daemon pod May 16 00:22:09.145: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:22:09.148: INFO: Number of nodes with available pods: 1 May 16 00:22:09.148: INFO: Node latest-worker is running more than one daemon pod May 16 00:22:10.015: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:22:10.019: INFO: Number of nodes with available pods: 1 May 16 00:22:10.019: INFO: Node latest-worker is running more than one daemon pod May 16 00:22:11.015: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:22:11.018: INFO: Number of nodes with available pods: 2 May 16 00:22:11.018: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8907, will wait for the garbage collector to delete the pods May 16 00:22:11.083: INFO: Deleting DaemonSet.extensions daemon-set took: 6.711505ms May 16 00:22:11.383: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.287921ms May 16 00:22:25.286: INFO: Number of nodes with available pods: 0 May 16 00:22:25.286: INFO: Number of running nodes: 0, number of available pods: 0 May 16 00:22:25.288: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8907/daemonsets","resourceVersion":"5010703"},"items":null} May 16 00:22:25.290: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8907/pods","resourceVersion":"5010703"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:22:25.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8907" for this suite. 
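The revive check above can be approximated by hand: the DaemonSet controller replaces any daemon pod that disappears or fails, so deleting one is enough to watch the retry path (the test forces pods to Failed instead, but recreation follows from the same controller loop). A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-demo                # illustrative name
spec:
  selector:
    matchLabels:
      app: daemon-demo
  template:
    metadata:
      labels:
        app: daemon-demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
EOF
kubectl rollout status ds/daemon-demo            # one pod per schedulable (untainted) node
kubectl delete pod -l app=daemon-demo --wait=false
kubectl get pods -l app=daemon-demo -w           # watch replacement pods appear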
• [SLOW TEST:23.731 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":135,"skipped":2186,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:22:25.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:22:25.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' May 16 00:22:25.640: INFO: stderr: "" May 16 00:22:25.640: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:22:25.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2630" for this suite. 
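The assertion behind this spec is only that both halves of the version report are present, which is easy to check by hand; the -o json form below is an assumption about the local kubectl build, not something this run used:

kubectl version                                  # expect a Client Version and a Server Version line
kubectl version -o json | grep -c gitVersion     # expect 2: clientVersion and serverVersion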
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":136,"skipped":2201,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:22:25.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode May 16 00:22:25.940: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4272" to be "Succeeded or Failed" May 16 00:22:25.990: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 50.030657ms May 16 00:22:27.995: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054336873s May 16 00:22:29.999: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058435941s May 16 00:22:32.004: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063506077s STEP: Saw pod success May 16 00:22:32.004: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 16 00:22:32.007: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 16 00:22:32.057: INFO: Waiting for pod pod-host-path-test to disappear May 16 00:22:32.062: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:22:32.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-4272" for this suite. 
• [SLOW TEST:6.421 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":137,"skipped":2210,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:22:32.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9568 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-9568 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9568 May 16 00:22:32.158: INFO: Found 0 stateful pods, waiting for 1 May 16 00:22:42.162: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 16 00:22:42.164: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9568 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 00:22:45.495: INFO: stderr: "I0516 00:22:45.378643 2203 log.go:172] (0xc0009dc0b0) (0xc0006a94a0) Create stream\nI0516 00:22:45.378680 2203 log.go:172] (0xc0009dc0b0) (0xc0006a94a0) Stream added, broadcasting: 1\nI0516 00:22:45.381660 2203 log.go:172] (0xc0009dc0b0) Reply frame received for 1\nI0516 00:22:45.381687 2203 log.go:172] (0xc0009dc0b0) (0xc0006a9b80) Create stream\nI0516 00:22:45.381698 2203 log.go:172] (0xc0009dc0b0) (0xc0006a9b80) Stream added, broadcasting: 3\nI0516 00:22:45.382511 2203 log.go:172] (0xc0009dc0b0) Reply frame received for 3\nI0516 00:22:45.382552 2203 log.go:172] (0xc0009dc0b0) (0xc0006cbea0) Create stream\nI0516 00:22:45.382572 2203 log.go:172] (0xc0009dc0b0) (0xc0006cbea0) Stream added, broadcasting: 5\nI0516 00:22:45.383295 2203 log.go:172] (0xc0009dc0b0) Reply frame received for 5\nI0516 00:22:45.448797 2203 log.go:172] (0xc0009dc0b0) Data frame received for 5\nI0516 00:22:45.448834 2203 log.go:172] (0xc0006cbea0) (5) Data frame handling\nI0516 00:22:45.448852 2203 log.go:172] (0xc0006cbea0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 00:22:45.486344 2203 
log.go:172] (0xc0009dc0b0) Data frame received for 5\nI0516 00:22:45.486373 2203 log.go:172] (0xc0006cbea0) (5) Data frame handling\nI0516 00:22:45.486411 2203 log.go:172] (0xc0009dc0b0) Data frame received for 3\nI0516 00:22:45.486445 2203 log.go:172] (0xc0006a9b80) (3) Data frame handling\nI0516 00:22:45.486481 2203 log.go:172] (0xc0006a9b80) (3) Data frame sent\nI0516 00:22:45.486504 2203 log.go:172] (0xc0009dc0b0) Data frame received for 3\nI0516 00:22:45.486518 2203 log.go:172] (0xc0006a9b80) (3) Data frame handling\nI0516 00:22:45.488651 2203 log.go:172] (0xc0009dc0b0) Data frame received for 1\nI0516 00:22:45.488682 2203 log.go:172] (0xc0006a94a0) (1) Data frame handling\nI0516 00:22:45.488694 2203 log.go:172] (0xc0006a94a0) (1) Data frame sent\nI0516 00:22:45.488711 2203 log.go:172] (0xc0009dc0b0) (0xc0006a94a0) Stream removed, broadcasting: 1\nI0516 00:22:45.488759 2203 log.go:172] (0xc0009dc0b0) Go away received\nI0516 00:22:45.489325 2203 log.go:172] (0xc0009dc0b0) (0xc0006a94a0) Stream removed, broadcasting: 1\nI0516 00:22:45.489346 2203 log.go:172] (0xc0009dc0b0) (0xc0006a9b80) Stream removed, broadcasting: 3\nI0516 00:22:45.489357 2203 log.go:172] (0xc0009dc0b0) (0xc0006cbea0) Stream removed, broadcasting: 5\n" May 16 00:22:45.495: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 00:22:45.495: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 00:22:45.544: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 16 00:22:55.547: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 16 00:22:55.547: INFO: Waiting for statefulset status.replicas updated to 0 May 16 00:22:55.566: INFO: POD NODE PHASE GRACE CONDITIONS May 16 00:22:55.567: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC }] May 16 00:22:55.567: INFO: May 16 00:22:55.567: INFO: StatefulSet ss has not reached scale 3, at 1 May 16 00:22:56.577: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989088607s May 16 00:22:57.591: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.978778914s May 16 00:22:58.616: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.964204681s May 16 00:22:59.666: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.939839471s May 16 00:23:00.683: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.889660285s May 16 00:23:01.701: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.872860978s May 16 00:23:02.705: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.855008964s May 16 00:23:03.710: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.850588113s May 16 00:23:04.716: INFO: Verifying statefulset ss doesn't scale past 3 for another 845.318516ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9568 May 16 00:23:05.719: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9568 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 00:23:05.933: INFO: stderr: "I0516 00:23:05.850208 2237 log.go:172] (0xc000b0e0b0) (0xc0005292c0) Create stream\nI0516 00:23:05.850267 2237 log.go:172] (0xc000b0e0b0) (0xc0005292c0) Stream added, broadcasting: 1\nI0516 00:23:05.851828 2237 log.go:172] (0xc000b0e0b0) Reply frame received for 1\nI0516 00:23:05.851871 2237 log.go:172] (0xc000b0e0b0) (0xc00024a8c0) Create stream\nI0516 00:23:05.851889 2237 log.go:172] (0xc000b0e0b0) (0xc00024a8c0) Stream added, broadcasting: 3\nI0516 00:23:05.852817 2237 log.go:172] (0xc000b0e0b0) Reply frame received for 3\nI0516 00:23:05.852854 2237 log.go:172] (0xc000b0e0b0) (0xc000529a40) Create stream\nI0516 00:23:05.852865 2237 log.go:172] (0xc000b0e0b0) (0xc000529a40) Stream added, broadcasting: 5\nI0516 00:23:05.854050 2237 log.go:172] (0xc000b0e0b0) Reply frame received for 5\nI0516 00:23:05.924589 2237 log.go:172] (0xc000b0e0b0) Data frame received for 3\nI0516 00:23:05.924627 2237 log.go:172] (0xc00024a8c0) (3) Data frame handling\nI0516 00:23:05.924648 2237 log.go:172] (0xc00024a8c0) (3) Data frame sent\nI0516 00:23:05.924666 2237 log.go:172] (0xc000b0e0b0) Data frame received for 3\nI0516 00:23:05.924684 2237 log.go:172] (0xc00024a8c0) (3) Data frame handling\nI0516 00:23:05.925006 2237 log.go:172] (0xc000b0e0b0) Data frame received for 5\nI0516 00:23:05.925039 2237 log.go:172] (0xc000529a40) (5) Data frame handling\nI0516 00:23:05.925084 2237 log.go:172] (0xc000529a40) (5) Data frame sent\nI0516 00:23:05.925105 2237 log.go:172] (0xc000b0e0b0) Data frame received for 5\nI0516 00:23:05.925342 2237 log.go:172] (0xc000529a40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 00:23:05.927107 2237 log.go:172] (0xc000b0e0b0) Data frame received for 1\nI0516 00:23:05.927139 2237 log.go:172] (0xc0005292c0) (1) Data frame handling\nI0516 00:23:05.927169 2237 log.go:172] (0xc0005292c0) (1) Data frame sent\nI0516 00:23:05.927215 2237 log.go:172] (0xc000b0e0b0) (0xc0005292c0) Stream removed, broadcasting: 1\nI0516 00:23:05.927324 2237 log.go:172] (0xc000b0e0b0) Go away received\nI0516 00:23:05.927827 2237 log.go:172] (0xc000b0e0b0) (0xc0005292c0) Stream removed, broadcasting: 1\nI0516 00:23:05.927849 2237 log.go:172] (0xc000b0e0b0) (0xc00024a8c0) Stream removed, broadcasting: 3\nI0516 00:23:05.927861 2237 log.go:172] (0xc000b0e0b0) (0xc000529a40) Stream removed, broadcasting: 5\n" May 16 00:23:05.934: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 00:23:05.934: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 00:23:05.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9568 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 00:23:06.136: INFO: stderr: "I0516 00:23:06.067208 2257 log.go:172] (0xc000940c60) (0xc0009fe3c0) Create stream\nI0516 00:23:06.067271 2257 log.go:172] (0xc000940c60) (0xc0009fe3c0) Stream added, broadcasting: 1\nI0516 00:23:06.071726 2257 log.go:172] (0xc000940c60) Reply frame received for 1\nI0516 00:23:06.071763 2257 log.go:172] (0xc000940c60) (0xc000844be0) Create stream\nI0516 00:23:06.071787 2257 log.go:172] (0xc000940c60) (0xc000844be0) Stream added, 
broadcasting: 3\nI0516 00:23:06.072786 2257 log.go:172] (0xc000940c60) Reply frame received for 3\nI0516 00:23:06.072842 2257 log.go:172] (0xc000940c60) (0xc000624c80) Create stream\nI0516 00:23:06.072859 2257 log.go:172] (0xc000940c60) (0xc000624c80) Stream added, broadcasting: 5\nI0516 00:23:06.073912 2257 log.go:172] (0xc000940c60) Reply frame received for 5\nI0516 00:23:06.129814 2257 log.go:172] (0xc000940c60) Data frame received for 5\nI0516 00:23:06.129859 2257 log.go:172] (0xc000624c80) (5) Data frame handling\nI0516 00:23:06.129876 2257 log.go:172] (0xc000624c80) (5) Data frame sent\nI0516 00:23:06.129889 2257 log.go:172] (0xc000940c60) Data frame received for 5\nI0516 00:23:06.129900 2257 log.go:172] (0xc000624c80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0516 00:23:06.129928 2257 log.go:172] (0xc000940c60) Data frame received for 3\nI0516 00:23:06.129939 2257 log.go:172] (0xc000844be0) (3) Data frame handling\nI0516 00:23:06.129960 2257 log.go:172] (0xc000844be0) (3) Data frame sent\nI0516 00:23:06.129976 2257 log.go:172] (0xc000940c60) Data frame received for 3\nI0516 00:23:06.129988 2257 log.go:172] (0xc000844be0) (3) Data frame handling\nI0516 00:23:06.131545 2257 log.go:172] (0xc000940c60) Data frame received for 1\nI0516 00:23:06.131577 2257 log.go:172] (0xc0009fe3c0) (1) Data frame handling\nI0516 00:23:06.131604 2257 log.go:172] (0xc0009fe3c0) (1) Data frame sent\nI0516 00:23:06.131677 2257 log.go:172] (0xc000940c60) (0xc0009fe3c0) Stream removed, broadcasting: 1\nI0516 00:23:06.131913 2257 log.go:172] (0xc000940c60) Go away received\nI0516 00:23:06.132034 2257 log.go:172] (0xc000940c60) (0xc0009fe3c0) Stream removed, broadcasting: 1\nI0516 00:23:06.132054 2257 log.go:172] (0xc000940c60) (0xc000844be0) Stream removed, broadcasting: 3\nI0516 00:23:06.132067 2257 log.go:172] (0xc000940c60) (0xc000624c80) Stream removed, broadcasting: 5\n" May 16 00:23:06.137: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 00:23:06.137: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 00:23:06.137: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9568 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 00:23:06.370: INFO: stderr: "I0516 00:23:06.290199 2278 log.go:172] (0xc000997290) (0xc000aa63c0) Create stream\nI0516 00:23:06.290251 2278 log.go:172] (0xc000997290) (0xc000aa63c0) Stream added, broadcasting: 1\nI0516 00:23:06.295586 2278 log.go:172] (0xc000997290) Reply frame received for 1\nI0516 00:23:06.295634 2278 log.go:172] (0xc000997290) (0xc00068e780) Create stream\nI0516 00:23:06.295653 2278 log.go:172] (0xc000997290) (0xc00068e780) Stream added, broadcasting: 3\nI0516 00:23:06.296567 2278 log.go:172] (0xc000997290) Reply frame received for 3\nI0516 00:23:06.296622 2278 log.go:172] (0xc000997290) (0xc00044c3c0) Create stream\nI0516 00:23:06.296641 2278 log.go:172] (0xc000997290) (0xc00044c3c0) Stream added, broadcasting: 5\nI0516 00:23:06.297789 2278 log.go:172] (0xc000997290) Reply frame received for 5\nI0516 00:23:06.362275 2278 log.go:172] (0xc000997290) Data frame received for 5\nI0516 00:23:06.362318 2278 log.go:172] (0xc00044c3c0) (5) Data frame handling\nI0516 00:23:06.362334 2278 log.go:172] (0xc00044c3c0) (5) Data 
frame sent\nI0516 00:23:06.362346 2278 log.go:172] (0xc000997290) Data frame received for 5\nI0516 00:23:06.362357 2278 log.go:172] (0xc00044c3c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0516 00:23:06.362389 2278 log.go:172] (0xc000997290) Data frame received for 3\nI0516 00:23:06.362406 2278 log.go:172] (0xc00068e780) (3) Data frame handling\nI0516 00:23:06.362424 2278 log.go:172] (0xc00068e780) (3) Data frame sent\nI0516 00:23:06.362456 2278 log.go:172] (0xc000997290) Data frame received for 3\nI0516 00:23:06.362479 2278 log.go:172] (0xc00068e780) (3) Data frame handling\nI0516 00:23:06.363926 2278 log.go:172] (0xc000997290) Data frame received for 1\nI0516 00:23:06.363957 2278 log.go:172] (0xc000aa63c0) (1) Data frame handling\nI0516 00:23:06.363974 2278 log.go:172] (0xc000aa63c0) (1) Data frame sent\nI0516 00:23:06.363997 2278 log.go:172] (0xc000997290) (0xc000aa63c0) Stream removed, broadcasting: 1\nI0516 00:23:06.364048 2278 log.go:172] (0xc000997290) Go away received\nI0516 00:23:06.364422 2278 log.go:172] (0xc000997290) (0xc000aa63c0) Stream removed, broadcasting: 1\nI0516 00:23:06.364444 2278 log.go:172] (0xc000997290) (0xc00068e780) Stream removed, broadcasting: 3\nI0516 00:23:06.364456 2278 log.go:172] (0xc000997290) (0xc00044c3c0) Stream removed, broadcasting: 5\n" May 16 00:23:06.370: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 00:23:06.370: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 00:23:06.374: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 16 00:23:16.380: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 16 00:23:16.380: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 16 00:23:16.380: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 16 00:23:16.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9568 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 00:23:16.621: INFO: stderr: "I0516 00:23:16.524487 2301 log.go:172] (0xc000b13c30) (0xc0008665a0) Create stream\nI0516 00:23:16.524563 2301 log.go:172] (0xc000b13c30) (0xc0008665a0) Stream added, broadcasting: 1\nI0516 00:23:16.527390 2301 log.go:172] (0xc000b13c30) Reply frame received for 1\nI0516 00:23:16.527441 2301 log.go:172] (0xc000b13c30) (0xc0006654a0) Create stream\nI0516 00:23:16.527464 2301 log.go:172] (0xc000b13c30) (0xc0006654a0) Stream added, broadcasting: 3\nI0516 00:23:16.528415 2301 log.go:172] (0xc000b13c30) Reply frame received for 3\nI0516 00:23:16.528445 2301 log.go:172] (0xc000b13c30) (0xc00086ee60) Create stream\nI0516 00:23:16.528466 2301 log.go:172] (0xc000b13c30) (0xc00086ee60) Stream added, broadcasting: 5\nI0516 00:23:16.530001 2301 log.go:172] (0xc000b13c30) Reply frame received for 5\nI0516 00:23:16.613580 2301 log.go:172] (0xc000b13c30) Data frame received for 5\nI0516 00:23:16.613611 2301 log.go:172] (0xc00086ee60) (5) Data frame handling\nI0516 00:23:16.613625 2301 log.go:172] (0xc00086ee60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 00:23:16.613651 
2301 log.go:172] (0xc000b13c30) Data frame received for 3\nI0516 00:23:16.613661 2301 log.go:172] (0xc0006654a0) (3) Data frame handling\nI0516 00:23:16.613671 2301 log.go:172] (0xc0006654a0) (3) Data frame sent\nI0516 00:23:16.613680 2301 log.go:172] (0xc000b13c30) Data frame received for 3\nI0516 00:23:16.613690 2301 log.go:172] (0xc0006654a0) (3) Data frame handling\nI0516 00:23:16.613961 2301 log.go:172] (0xc000b13c30) Data frame received for 5\nI0516 00:23:16.613978 2301 log.go:172] (0xc00086ee60) (5) Data frame handling\nI0516 00:23:16.615764 2301 log.go:172] (0xc000b13c30) Data frame received for 1\nI0516 00:23:16.615787 2301 log.go:172] (0xc0008665a0) (1) Data frame handling\nI0516 00:23:16.615807 2301 log.go:172] (0xc0008665a0) (1) Data frame sent\nI0516 00:23:16.615828 2301 log.go:172] (0xc000b13c30) (0xc0008665a0) Stream removed, broadcasting: 1\nI0516 00:23:16.615854 2301 log.go:172] (0xc000b13c30) Go away received\nI0516 00:23:16.616394 2301 log.go:172] (0xc000b13c30) (0xc0008665a0) Stream removed, broadcasting: 1\nI0516 00:23:16.616415 2301 log.go:172] (0xc000b13c30) (0xc0006654a0) Stream removed, broadcasting: 3\nI0516 00:23:16.616426 2301 log.go:172] (0xc000b13c30) (0xc00086ee60) Stream removed, broadcasting: 5\n" May 16 00:23:16.622: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 00:23:16.622: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 00:23:16.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9568 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 00:23:16.864: INFO: stderr: "I0516 00:23:16.756860 2322 log.go:172] (0xc00003a0b0) (0xc0006945a0) Create stream\nI0516 00:23:16.756923 2322 log.go:172] (0xc00003a0b0) (0xc0006945a0) Stream added, broadcasting: 1\nI0516 00:23:16.758866 2322 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0516 00:23:16.758926 2322 log.go:172] (0xc00003a0b0) (0xc000540280) Create stream\nI0516 00:23:16.758954 2322 log.go:172] (0xc00003a0b0) (0xc000540280) Stream added, broadcasting: 3\nI0516 00:23:16.759993 2322 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0516 00:23:16.760043 2322 log.go:172] (0xc00003a0b0) (0xc000541220) Create stream\nI0516 00:23:16.760061 2322 log.go:172] (0xc00003a0b0) (0xc000541220) Stream added, broadcasting: 5\nI0516 00:23:16.760906 2322 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0516 00:23:16.831222 2322 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0516 00:23:16.831256 2322 log.go:172] (0xc000541220) (5) Data frame handling\nI0516 00:23:16.831292 2322 log.go:172] (0xc000541220) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 00:23:16.857928 2322 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0516 00:23:16.857961 2322 log.go:172] (0xc000541220) (5) Data frame handling\nI0516 00:23:16.857984 2322 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0516 00:23:16.857992 2322 log.go:172] (0xc000540280) (3) Data frame handling\nI0516 00:23:16.857998 2322 log.go:172] (0xc000540280) (3) Data frame sent\nI0516 00:23:16.858004 2322 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0516 00:23:16.858008 2322 log.go:172] (0xc000540280) (3) Data frame handling\nI0516 00:23:16.859374 2322 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0516 00:23:16.859400 2322 log.go:172] 
(0xc0006945a0) (1) Data frame handling\nI0516 00:23:16.859408 2322 log.go:172] (0xc0006945a0) (1) Data frame sent\nI0516 00:23:16.859421 2322 log.go:172] (0xc00003a0b0) (0xc0006945a0) Stream removed, broadcasting: 1\nI0516 00:23:16.859432 2322 log.go:172] (0xc00003a0b0) Go away received\nI0516 00:23:16.859705 2322 log.go:172] (0xc00003a0b0) (0xc0006945a0) Stream removed, broadcasting: 1\nI0516 00:23:16.859716 2322 log.go:172] (0xc00003a0b0) (0xc000540280) Stream removed, broadcasting: 3\nI0516 00:23:16.859721 2322 log.go:172] (0xc00003a0b0) (0xc000541220) Stream removed, broadcasting: 5\n" May 16 00:23:16.864: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 00:23:16.864: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 00:23:16.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9568 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 00:23:17.113: INFO: stderr: "I0516 00:23:16.994463 2345 log.go:172] (0xc00041a0b0) (0xc000130d20) Create stream\nI0516 00:23:16.994513 2345 log.go:172] (0xc00041a0b0) (0xc000130d20) Stream added, broadcasting: 1\nI0516 00:23:16.996478 2345 log.go:172] (0xc00041a0b0) Reply frame received for 1\nI0516 00:23:16.996507 2345 log.go:172] (0xc00041a0b0) (0xc0000dd0e0) Create stream\nI0516 00:23:16.996517 2345 log.go:172] (0xc00041a0b0) (0xc0000dd0e0) Stream added, broadcasting: 3\nI0516 00:23:16.997397 2345 log.go:172] (0xc00041a0b0) Reply frame received for 3\nI0516 00:23:16.997421 2345 log.go:172] (0xc00041a0b0) (0xc0001317c0) Create stream\nI0516 00:23:16.997429 2345 log.go:172] (0xc00041a0b0) (0xc0001317c0) Stream added, broadcasting: 5\nI0516 00:23:16.998147 2345 log.go:172] (0xc00041a0b0) Reply frame received for 5\nI0516 00:23:17.074385 2345 log.go:172] (0xc00041a0b0) Data frame received for 5\nI0516 00:23:17.074407 2345 log.go:172] (0xc0001317c0) (5) Data frame handling\nI0516 00:23:17.074432 2345 log.go:172] (0xc0001317c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 00:23:17.104400 2345 log.go:172] (0xc00041a0b0) Data frame received for 3\nI0516 00:23:17.104486 2345 log.go:172] (0xc0000dd0e0) (3) Data frame handling\nI0516 00:23:17.104519 2345 log.go:172] (0xc0000dd0e0) (3) Data frame sent\nI0516 00:23:17.104535 2345 log.go:172] (0xc00041a0b0) Data frame received for 3\nI0516 00:23:17.104546 2345 log.go:172] (0xc0000dd0e0) (3) Data frame handling\nI0516 00:23:17.104660 2345 log.go:172] (0xc00041a0b0) Data frame received for 5\nI0516 00:23:17.104688 2345 log.go:172] (0xc0001317c0) (5) Data frame handling\nI0516 00:23:17.107293 2345 log.go:172] (0xc00041a0b0) Data frame received for 1\nI0516 00:23:17.107319 2345 log.go:172] (0xc000130d20) (1) Data frame handling\nI0516 00:23:17.107344 2345 log.go:172] (0xc000130d20) (1) Data frame sent\nI0516 00:23:17.107362 2345 log.go:172] (0xc00041a0b0) (0xc000130d20) Stream removed, broadcasting: 1\nI0516 00:23:17.107453 2345 log.go:172] (0xc00041a0b0) Go away received\nI0516 00:23:17.107767 2345 log.go:172] (0xc00041a0b0) (0xc000130d20) Stream removed, broadcasting: 1\nI0516 00:23:17.107787 2345 log.go:172] (0xc00041a0b0) (0xc0000dd0e0) Stream removed, broadcasting: 3\nI0516 00:23:17.107798 2345 log.go:172] (0xc00041a0b0) (0xc0001317c0) Stream removed, broadcasting: 5\n" May 16 00:23:17.114: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 00:23:17.114: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 00:23:17.114: INFO: Waiting for statefulset status.replicas updated to 0 May 16 00:23:17.117: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 16 00:23:27.124: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 16 00:23:27.124: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 16 00:23:27.124: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 16 00:23:27.141: INFO: POD NODE PHASE GRACE CONDITIONS May 16 00:23:27.141: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC }] May 16 00:23:27.141: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC }] May 16 00:23:27.141: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC }] May 16 00:23:27.141: INFO: May 16 00:23:27.141: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 00:23:28.151: INFO: POD NODE PHASE GRACE CONDITIONS May 16 00:23:28.151: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC }] May 16 00:23:28.151: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC }] May 16 00:23:28.151: INFO: ss-2 latest-worker Running 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC }] May 16 00:23:28.151: INFO: May 16 00:23:28.151: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 00:23:29.311: INFO: POD NODE PHASE GRACE CONDITIONS May 16 00:23:29.311: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC }] May 16 00:23:29.311: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC }] May 16 00:23:29.311: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC }] May 16 00:23:29.311: INFO: May 16 00:23:29.311: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 00:23:30.341: INFO: POD NODE PHASE GRACE CONDITIONS May 16 00:23:30.341: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC }] May 16 00:23:30.341: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC }] May 16 00:23:30.341: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC }] May 16 00:23:30.341: INFO: May 16 00:23:30.341: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 00:23:31.347: INFO: POD NODE PHASE GRACE CONDITIONS May 16 00:23:31.347: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC }] May 16 00:23:31.347: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC }] May 16 00:23:31.347: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC }] May 16 00:23:31.347: INFO: May 16 00:23:31.347: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 00:23:32.353: INFO: POD NODE PHASE GRACE CONDITIONS May 16 00:23:32.353: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC }] May 16 00:23:32.353: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC }] May 16 00:23:32.353: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 
00:22:55 +0000 UTC }] May 16 00:23:32.353: INFO: May 16 00:23:32.353: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 00:23:33.359: INFO: POD NODE PHASE GRACE CONDITIONS May 16 00:23:33.359: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC }] May 16 00:23:33.360: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC }] May 16 00:23:33.360: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC }] May 16 00:23:33.360: INFO: May 16 00:23:33.360: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 00:23:34.364: INFO: POD NODE PHASE GRACE CONDITIONS May 16 00:23:34.364: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:32 +0000 UTC }] May 16 00:23:34.365: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC }] May 16 00:23:34.365: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:23:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 00:22:55 +0000 UTC }] May 16 00:23:34.365: INFO: May 16 00:23:34.365: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 00:23:35.369: INFO: Verifying statefulset ss doesn't scale past 0 for another 
1.766884366s May 16 00:23:36.373: INFO: Verifying statefulset ss doesn't scale past 0 for another 762.370654ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-9568 May 16 00:23:37.424: INFO: Scaling statefulset ss to 0 May 16 00:23:37.431: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 16 00:23:37.433: INFO: Deleting all statefulsets in ns statefulset-9568 May 16 00:23:37.435: INFO: Scaling statefulset ss to 0 May 16 00:23:37.442: INFO: Waiting for statefulset status.replicas updated to 0 May 16 00:23:37.444: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:23:37.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9568" for this suite. • [SLOW TEST:65.396 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":138,"skipped":2211,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:23:37.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 16 00:23:37.517: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:23:47.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7153" for this suite.
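For reference, the init-container sequencing exercised in the test above can be inspected by hand while such a pod exists; a minimal sketch with a hypothetical pod name (the run does not record one):
# On a RestartAlways pod, every init container must terminate successfully before the app containers start.
kubectl get pod pod-init-demo --namespace=init-container-7153 -o jsonpath='{.status.initContainerStatuses[*].state}'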
• [SLOW TEST:9.756 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":139,"skipped":2216,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:23:47.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 16 00:23:47.295: INFO: Waiting up to 5m0s for pod "var-expansion-c47dff70-1376-42ed-9d23-cd1e6aee8972" in namespace "var-expansion-2045" to be "Succeeded or Failed" May 16 00:23:47.298: INFO: Pod "var-expansion-c47dff70-1376-42ed-9d23-cd1e6aee8972": Phase="Pending", Reason="", readiness=false. Elapsed: 3.090242ms May 16 00:23:49.303: INFO: Pod "var-expansion-c47dff70-1376-42ed-9d23-cd1e6aee8972": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008031731s May 16 00:23:51.307: INFO: Pod "var-expansion-c47dff70-1376-42ed-9d23-cd1e6aee8972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012414109s STEP: Saw pod success May 16 00:23:51.307: INFO: Pod "var-expansion-c47dff70-1376-42ed-9d23-cd1e6aee8972" satisfied condition "Succeeded or Failed" May 16 00:23:51.310: INFO: Trying to get logs from node latest-worker pod var-expansion-c47dff70-1376-42ed-9d23-cd1e6aee8972 container dapi-container: STEP: delete the pod May 16 00:23:51.339: INFO: Waiting for pod var-expansion-c47dff70-1376-42ed-9d23-cd1e6aee8972 to disappear May 16 00:23:51.352: INFO: Pod var-expansion-c47dff70-1376-42ed-9d23-cd1e6aee8972 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:23:51.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2045" for this suite. 
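For reference, the substitution verified above is driven by the volumeMounts subPathExpr field, which expands $(VAR_NAME) references from the container's environment; a sketch of how to pull up its schema documentation (not part of the recorded run):
kubectl explain pod.spec.containers.volumeMounts.subPathExpr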
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":140,"skipped":2264,"failed":0} ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:23:51.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:23:51.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2199" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":141,"skipped":2264,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:23:51.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 16 00:23:52.771: INFO: Pod name wrapped-volume-race-d4439fe7-f6ff-4f21-a83d-e64522c32d0c: Found 0 pods out of 5 May 16 00:23:57.795: INFO: Pod name wrapped-volume-race-d4439fe7-f6ff-4f21-a83d-e64522c32d0c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d4439fe7-f6ff-4f21-a83d-e64522c32d0c in namespace emptydir-wrapper-8178, will wait for the garbage collector to delete the pods May 16 00:24:12.526: INFO: Deleting ReplicationController wrapped-volume-race-d4439fe7-f6ff-4f21-a83d-e64522c32d0c took: 8.502982ms May 16 00:24:12.926: INFO: Terminating ReplicationController wrapped-volume-race-d4439fe7-f6ff-4f21-a83d-e64522c32d0c pods took: 400.217966ms STEP: Creating RC which spawns configmap-volume pods May 16 00:24:25.514: INFO: Pod name wrapped-volume-race-1bd33a98-d9ab-485d-8238-9dde498c6183: Found 0 pods out of 5 May 16 00:24:30.524: INFO: Pod name wrapped-volume-race-1bd33a98-d9ab-485d-8238-9dde498c6183: 
Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1bd33a98-d9ab-485d-8238-9dde498c6183 in namespace emptydir-wrapper-8178, will wait for the garbage collector to delete the pods May 16 00:24:46.605: INFO: Deleting ReplicationController wrapped-volume-race-1bd33a98-d9ab-485d-8238-9dde498c6183 took: 6.909593ms May 16 00:24:47.005: INFO: Terminating ReplicationController wrapped-volume-race-1bd33a98-d9ab-485d-8238-9dde498c6183 pods took: 400.332746ms STEP: Creating RC which spawns configmap-volume pods May 16 00:24:55.374: INFO: Pod name wrapped-volume-race-338664ff-9580-47ea-95f3-311ce3ac1ee6: Found 0 pods out of 5 May 16 00:25:00.398: INFO: Pod name wrapped-volume-race-338664ff-9580-47ea-95f3-311ce3ac1ee6: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-338664ff-9580-47ea-95f3-311ce3ac1ee6 in namespace emptydir-wrapper-8178, will wait for the garbage collector to delete the pods May 16 00:25:16.478: INFO: Deleting ReplicationController wrapped-volume-race-338664ff-9580-47ea-95f3-311ce3ac1ee6 took: 6.868887ms May 16 00:25:16.878: INFO: Terminating ReplicationController wrapped-volume-race-338664ff-9580-47ea-95f3-311ce3ac1ee6 pods took: 400.208758ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:25:25.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8178" for this suite. • [SLOW TEST:94.521 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":142,"skipped":2286,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:25:25.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-c8a77f13-482f-480a-b053-4a7807691ed0 STEP: Creating a pod to test consume secrets May 16 00:25:26.035: INFO: Waiting up to 5m0s for pod "pod-secrets-5b668ea6-325e-4ae8-91e5-f9bc09608686" in namespace "secrets-7798" to be "Succeeded or Failed" May 16 00:25:26.046: INFO: Pod "pod-secrets-5b668ea6-325e-4ae8-91e5-f9bc09608686": Phase="Pending", Reason="", readiness=false. Elapsed: 10.908313ms May 16 00:25:28.072: INFO: Pod "pod-secrets-5b668ea6-325e-4ae8-91e5-f9bc09608686": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.036930702s May 16 00:25:30.167: INFO: Pod "pod-secrets-5b668ea6-325e-4ae8-91e5-f9bc09608686": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.132438963s STEP: Saw pod success May 16 00:25:30.167: INFO: Pod "pod-secrets-5b668ea6-325e-4ae8-91e5-f9bc09608686" satisfied condition "Succeeded or Failed" May 16 00:25:30.170: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-5b668ea6-325e-4ae8-91e5-f9bc09608686 container secret-volume-test: STEP: delete the pod May 16 00:25:30.267: INFO: Waiting for pod pod-secrets-5b668ea6-325e-4ae8-91e5-f9bc09608686 to disappear May 16 00:25:30.377: INFO: Pod pod-secrets-5b668ea6-325e-4ae8-91e5-f9bc09608686 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:25:30.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7798" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":143,"skipped":2299,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:25:30.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-cce995da-c3d8-45d2-8a63-8b87ae54eb07 May 16 00:25:30.504: INFO: Pod name my-hostname-basic-cce995da-c3d8-45d2-8a63-8b87ae54eb07: Found 0 pods out of 1 May 16 00:25:35.519: INFO: Pod name my-hostname-basic-cce995da-c3d8-45d2-8a63-8b87ae54eb07: Found 1 pods out of 1 May 16 00:25:35.519: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-cce995da-c3d8-45d2-8a63-8b87ae54eb07" are running May 16 00:25:35.524: INFO: Pod "my-hostname-basic-cce995da-c3d8-45d2-8a63-8b87ae54eb07-5dqc4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 00:25:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 00:25:33 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 00:25:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 00:25:30 +0000 UTC Reason: Message:}]) May 16 00:25:35.524: INFO: Trying to dial the pod May 16 00:25:40.536: INFO: Controller my-hostname-basic-cce995da-c3d8-45d2-8a63-8b87ae54eb07: Got expected result from replica 1 [my-hostname-basic-cce995da-c3d8-45d2-8a63-8b87ae54eb07-5dqc4]: 
"my-hostname-basic-cce995da-c3d8-45d2-8a63-8b87ae54eb07-5dqc4", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:25:40.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4435" for this suite. • [SLOW TEST:10.133 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":144,"skipped":2339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:25:40.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:26:40.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-84" for this suite. 
• [SLOW TEST:60.077 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":145,"skipped":2367,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:26:40.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-b43ab5f6-79ca-4996-86b4-e133e3865f51 in namespace container-probe-2546 May 16 00:26:46.735: INFO: Started pod test-webserver-b43ab5f6-79ca-4996-86b4-e133e3865f51 in namespace container-probe-2546 STEP: checking the pod's current state and verifying that restartCount is present May 16 00:26:46.738: INFO: Initial restart count of pod test-webserver-b43ab5f6-79ca-4996-86b4-e133e3865f51 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:30:47.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2546" for this suite. 
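For reference, the assertion above is that restartCount stays 0 while the /healthz liveness probe keeps passing; a sketch of the same query done by hand (the namespace is destroyed once the test finishes):
kubectl get pod test-webserver-b43ab5f6-79ca-4996-86b4-e133e3865f51 --namespace=container-probe-2546 -o jsonpath='{.status.containerStatuses[0].restartCount}'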
• [SLOW TEST:247.054 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":146,"skipped":2379,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:30:47.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:30:48.172: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:30:54.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3946" for this suite. 
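For reference, the listing the test above performs through the API corresponds to a single kubectl call (a sketch, not part of the recorded run):
kubectl get customresourcedefinitions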
• [SLOW TEST:6.887 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":147,"skipped":2382,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:30:54.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-047c04d6-9c9d-4073-9414-4c0118f414e3 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:30:54.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1421" for this suite. 
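For reference, the failure above is apiserver-side validation: ConfigMap data keys must be non-empty and match the allowed character set. A minimal sketch reproducing the rejection by hand, with a hypothetical ConfigMap name:
# The apiserver rejects this object because "" is not a valid data key.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-empty-key-demo
data:
  "": "value"
EOF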
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":148,"skipped":2395,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:30:54.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 00:30:55.370: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 00:30:57.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185855, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185855, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185855, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185855, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:30:59.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185855, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185855, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185855, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185855, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 00:31:02.618: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:31:03.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7520" for this suite. STEP: Destroying namespace "webhook-7520-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.604 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":149,"skipped":2406,"failed":0} SS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:31:03.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:31:03.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-123" for this suite. 
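For reference, the "secure master service" checked above is the built-in kubernetes Service in the default namespace, which exposes the API server on port 443; a sketch of the manual check:
kubectl get service kubernetes --namespace=default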
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":150,"skipped":2408,"failed":0} SS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:31:03.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 16 00:31:03.468: INFO: Created pod &Pod{ObjectMeta:{dns-3152 dns-3152 /api/v1/namespaces/dns-3152/pods/dns-3152 b66a1186-fc43-4012-b28b-eaa15ba601be 5013995 0 2020-05-16 00:31:03 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-16 00:31:03 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5qzv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5qzv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5qzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjec
tReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 00:31:03.488: INFO: The status of Pod dns-3152 is Pending, waiting for it to be Running (with Ready = true) May 16 00:31:05.492: INFO: The status of Pod dns-3152 is Pending, waiting for it to be Running (with Ready = true) May 16 00:31:07.492: INFO: The status of Pod dns-3152 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 16 00:31:07.492: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3152 PodName:dns-3152 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:31:07.492: INFO: >>> kubeConfig: /root/.kube/config I0516 00:31:07.523366 7 log.go:172] (0xc00171b340) (0xc001f915e0) Create stream I0516 00:31:07.523387 7 log.go:172] (0xc00171b340) (0xc001f915e0) Stream added, broadcasting: 1 I0516 00:31:07.524849 7 log.go:172] (0xc00171b340) Reply frame received for 1 I0516 00:31:07.524882 7 log.go:172] (0xc00171b340) (0xc001f91680) Create stream I0516 00:31:07.524895 7 log.go:172] (0xc00171b340) (0xc001f91680) Stream added, broadcasting: 3 I0516 00:31:07.525864 7 log.go:172] (0xc00171b340) Reply frame received for 3 I0516 00:31:07.525884 7 log.go:172] (0xc00171b340) (0xc00201e140) Create stream I0516 00:31:07.525894 7 log.go:172] (0xc00171b340) (0xc00201e140) Stream added, broadcasting: 5 I0516 00:31:07.526648 7 log.go:172] (0xc00171b340) Reply frame received for 5 I0516 00:31:07.611453 7 log.go:172] (0xc00171b340) Data frame received for 3 I0516 00:31:07.611515 7 log.go:172] (0xc001f91680) (3) Data frame handling I0516 00:31:07.611533 7 log.go:172] (0xc001f91680) (3) Data frame sent I0516 00:31:07.613848 7 log.go:172] (0xc00171b340) Data frame received for 3 I0516 00:31:07.613885 7 log.go:172] (0xc001f91680) (3) Data frame handling I0516 00:31:07.613911 7 log.go:172] (0xc00171b340) Data frame received for 5 I0516 00:31:07.613925 7 log.go:172] (0xc00201e140) (5) Data frame handling I0516 00:31:07.615665 7 log.go:172] (0xc00171b340) Data frame received for 1 I0516 00:31:07.615679 7 log.go:172] (0xc001f915e0) (1) Data frame handling I0516 00:31:07.615690 7 log.go:172] (0xc001f915e0) (1) Data frame sent I0516 00:31:07.615709 7 log.go:172] (0xc00171b340) (0xc001f915e0) Stream removed, broadcasting: 1 I0516 00:31:07.615736 7 log.go:172] (0xc00171b340) Go away received I0516 00:31:07.615823 7 log.go:172] (0xc00171b340) (0xc001f915e0) Stream removed, broadcasting: 1 I0516 00:31:07.615844 7 log.go:172] 
(0xc00171b340) (0xc001f91680) Stream removed, broadcasting: 3 I0516 00:31:07.615859 7 log.go:172] (0xc00171b340) (0xc00201e140) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 16 00:31:07.615: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3152 PodName:dns-3152 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:31:07.615: INFO: >>> kubeConfig: /root/.kube/config I0516 00:31:07.653824 7 log.go:172] (0xc002f98370) (0xc001ae6960) Create stream I0516 00:31:07.653850 7 log.go:172] (0xc002f98370) (0xc001ae6960) Stream added, broadcasting: 1 I0516 00:31:07.655280 7 log.go:172] (0xc002f98370) Reply frame received for 1 I0516 00:31:07.655309 7 log.go:172] (0xc002f98370) (0xc0024bad20) Create stream I0516 00:31:07.655321 7 log.go:172] (0xc002f98370) (0xc0024bad20) Stream added, broadcasting: 3 I0516 00:31:07.656227 7 log.go:172] (0xc002f98370) Reply frame received for 3 I0516 00:31:07.656263 7 log.go:172] (0xc002f98370) (0xc00201e320) Create stream I0516 00:31:07.656275 7 log.go:172] (0xc002f98370) (0xc00201e320) Stream added, broadcasting: 5 I0516 00:31:07.657309 7 log.go:172] (0xc002f98370) Reply frame received for 5 I0516 00:31:07.729768 7 log.go:172] (0xc002f98370) Data frame received for 3 I0516 00:31:07.729798 7 log.go:172] (0xc0024bad20) (3) Data frame handling I0516 00:31:07.729817 7 log.go:172] (0xc0024bad20) (3) Data frame sent I0516 00:31:07.732212 7 log.go:172] (0xc002f98370) Data frame received for 5 I0516 00:31:07.732230 7 log.go:172] (0xc00201e320) (5) Data frame handling I0516 00:31:07.732535 7 log.go:172] (0xc002f98370) Data frame received for 3 I0516 00:31:07.732550 7 log.go:172] (0xc0024bad20) (3) Data frame handling I0516 00:31:07.734486 7 log.go:172] (0xc002f98370) Data frame received for 1 I0516 00:31:07.734502 7 log.go:172] (0xc001ae6960) (1) Data frame handling I0516 00:31:07.734511 7 log.go:172] (0xc001ae6960) (1) Data frame sent I0516 00:31:07.734521 7 log.go:172] (0xc002f98370) (0xc001ae6960) Stream removed, broadcasting: 1 I0516 00:31:07.734614 7 log.go:172] (0xc002f98370) Go away received I0516 00:31:07.734643 7 log.go:172] (0xc002f98370) (0xc001ae6960) Stream removed, broadcasting: 1 I0516 00:31:07.734660 7 log.go:172] (0xc002f98370) (0xc0024bad20) Stream removed, broadcasting: 3 I0516 00:31:07.734730 7 log.go:172] (0xc002f98370) (0xc00201e320) Stream removed, broadcasting: 5 May 16 00:31:07.734: INFO: Deleting pod dns-3152... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:31:07.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3152" for this suite. 
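For reference, the pod above runs with dnsPolicy None and a custom dnsConfig (nameserver 1.1.1.1, search domain resolv.conf.local), so its resolver configuration can be confirmed directly while the pod is alive; a sketch:
kubectl exec dns-3152 --namespace=dns-3152 -- cat /etc/resolv.conf
# expected output includes: nameserver 1.1.1.1  and  search resolv.conf.local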
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":151,"skipped":2410,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:31:07.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-8ca416a2-e30b-4c9e-a287-f0a9398b160a STEP: Creating a pod to test consume secrets May 16 00:31:07.944: INFO: Waiting up to 5m0s for pod "pod-secrets-02f33282-5070-4df8-be73-f45071d9053c" in namespace "secrets-4690" to be "Succeeded or Failed" May 16 00:31:08.086: INFO: Pod "pod-secrets-02f33282-5070-4df8-be73-f45071d9053c": Phase="Pending", Reason="", readiness=false. Elapsed: 142.084517ms May 16 00:31:10.090: INFO: Pod "pod-secrets-02f33282-5070-4df8-be73-f45071d9053c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146056457s May 16 00:31:12.095: INFO: Pod "pod-secrets-02f33282-5070-4df8-be73-f45071d9053c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.15072083s STEP: Saw pod success May 16 00:31:12.095: INFO: Pod "pod-secrets-02f33282-5070-4df8-be73-f45071d9053c" satisfied condition "Succeeded or Failed" May 16 00:31:12.098: INFO: Trying to get logs from node latest-worker pod pod-secrets-02f33282-5070-4df8-be73-f45071d9053c container secret-volume-test: STEP: delete the pod May 16 00:31:12.147: INFO: Waiting for pod pod-secrets-02f33282-5070-4df8-be73-f45071d9053c to disappear May 16 00:31:12.154: INFO: Pod pod-secrets-02f33282-5070-4df8-be73-f45071d9053c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:31:12.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4690" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":152,"skipped":2435,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:31:12.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:31:16.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6342" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":153,"skipped":2442,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:31:16.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:31:16.392: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 16 00:31:21.396: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 16 00:31:21.396: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 16 00:31:21.506: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6050 /apis/apps/v1/namespaces/deployment-6050/deployments/test-cleanup-deployment fe4bfd54-da71-46ef-b9bb-64749de723ec 5014167 1 2020-05-16 00:31:21 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-05-16 00:31:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001e91638 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 16 00:31:21.510: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-6050 /apis/apps/v1/namespaces/deployment-6050/replicasets/test-cleanup-deployment-6688745694 cddd09e8-85f4-42bb-9c4f-334b62fbbc75 5014169 1 2020-05-16 00:31:21 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment fe4bfd54-da71-46ef-b9bb-64749de723ec 0xc001e91b07 0xc001e91b08}] [] [{kube-controller-manager Update apps/v1 2020-05-16 00:31:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4bfd54-da71-46ef-b9bb-64749de723ec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001e91b98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 16 00:31:21.510: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 16 00:31:21.510: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-6050 /apis/apps/v1/namespaces/deployment-6050/replicasets/test-cleanup-controller 3f64064e-237b-48a1-9020-1b9b2d5e6d75 5014168 1 2020-05-16 00:31:16 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment fe4bfd54-da71-46ef-b9bb-64749de723ec 0xc001e919df 0xc001e919f0}] [] [{e2e.test Update apps/v1 2020-05-16 00:31:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-16 00:31:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"fe4bfd54-da71-46ef-b9bb-64749de723ec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001e91a88 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 16 00:31:21.551: INFO: Pod "test-cleanup-controller-zv7t2" is available: &Pod{ObjectMeta:{test-cleanup-controller-zv7t2 test-cleanup-controller- deployment-6050 /api/v1/namespaces/deployment-6050/pods/test-cleanup-controller-zv7t2 66e93a69-6f08-4063-8769-e888870ac44e 5014147 0 2020-05-16 00:31:16 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 3f64064e-237b-48a1-9020-1b9b2d5e6d75 0xc003563037 0xc003563038}] [] [{kube-controller-manager Update v1 2020-05-16 00:31:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3f64064e-237b-48a1-9020-1b9b2d5e6d75\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 00:31:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.193\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9xdvh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9xdvh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9xdvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 00:31:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 00:31:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 00:31:19 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 00:31:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.193,StartTime:2020-05-16 00:31:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 00:31:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://422a7b09e49d908d8bff114a356de97853fcac9afaf4cb5818d2406fb61575ac,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.193,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 00:31:21.551: INFO: Pod "test-cleanup-deployment-6688745694-9h7k6" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-9h7k6 test-cleanup-deployment-6688745694- deployment-6050 /api/v1/namespaces/deployment-6050/pods/test-cleanup-deployment-6688745694-9h7k6 68741108-bc74-4ff4-b81e-d0d522200248 5014171 0 2020-05-16 00:31:21 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 cddd09e8-85f4-42bb-9c4f-334b62fbbc75 0xc003563217 0xc003563218}] [] [{kube-controller-manager Update v1 2020-05-16 00:31:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddd09e8-85f4-42bb-9c4f-334b62fbbc75\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9xdvh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9xdvh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9xdvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,All
owPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:31:21.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6050" for this suite. 
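The object dump above shows the mechanism under test: the Deployment carries RevisionHistoryLimit:*0, so once the new ReplicaSet (test-cleanup-deployment-6688745694) is rolled out, the controller should delete the adopted old ReplicaSet (test-cleanup-controller) rather than retain it as rollback history. A minimal sketch of a Deployment configured the same way, with hypothetical names:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo               # hypothetical
spec:
  revisionHistoryLimit: 0          # keep zero old ReplicaSets after a rollout
  replicas: 1
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13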
• [SLOW TEST:5.365 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":154,"skipped":2473,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:31:21.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 00:31:22.328: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 00:31:24.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185882, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185882, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185882, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185882, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:31:26.372: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185882, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185882, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185882, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185882, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 00:31:29.404: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:31:29.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6159" for this suite. STEP: Destroying namespace "webhook-6159-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.959 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":155,"skipped":2505,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:31:29.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 16 00:31:29.687: INFO: Pod name pod-release: Found 0 pods out of 1 May 16 00:31:34.700: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:31:34.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2988" for this suite. 
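The ReplicationController test above depends on label-selector ownership: an RC only controls pods whose labels match its selector, so changing the matched label on a pod "releases" it (its ownerReference is dropped and the RC creates a replacement to restore the replica count). A minimal sketch of such an RC, with hypothetical names; httpd:2.4.38-alpine is the image seen elsewhere in this run:

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release-demo           # hypothetical
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release          # relabeling a pod away from this releases it
    spec:
      containers:
      - name: pod-release
        image: docker.io/library/httpd:2.4.38-alpine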
• [SLOW TEST:5.276 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":156,"skipped":2532,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:31:34.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 16 00:31:34.983: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92315133-cb6f-4a7f-a18d-ae5c71de9a98" in namespace "downward-api-7518" to be "Succeeded or Failed" May 16 00:31:35.057: INFO: Pod "downwardapi-volume-92315133-cb6f-4a7f-a18d-ae5c71de9a98": Phase="Pending", Reason="", readiness=false. Elapsed: 73.649044ms May 16 00:31:37.060: INFO: Pod "downwardapi-volume-92315133-cb6f-4a7f-a18d-ae5c71de9a98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077590398s May 16 00:31:39.081: INFO: Pod "downwardapi-volume-92315133-cb6f-4a7f-a18d-ae5c71de9a98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098002238s May 16 00:31:41.086: INFO: Pod "downwardapi-volume-92315133-cb6f-4a7f-a18d-ae5c71de9a98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.103031432s STEP: Saw pod success May 16 00:31:41.086: INFO: Pod "downwardapi-volume-92315133-cb6f-4a7f-a18d-ae5c71de9a98" satisfied condition "Succeeded or Failed" May 16 00:31:41.090: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-92315133-cb6f-4a7f-a18d-ae5c71de9a98 container client-container: STEP: delete the pod May 16 00:31:41.153: INFO: Waiting for pod downwardapi-volume-92315133-cb6f-4a7f-a18d-ae5c71de9a98 to disappear May 16 00:31:41.167: INFO: Pod downwardapi-volume-92315133-cb6f-4a7f-a18d-ae5c71de9a98 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:31:41.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7518" for this suite. 
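The downward-API test above projects the container's own memory request into a file via a downwardAPI volume and has the container print it; the projected value is the request in bytes (32Mi would appear as 33554432). A minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                 # illustrative stand-in
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:          # projects requests.memory, in bytes by default
          containerName: client-container
          resource: requests.memory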
• [SLOW TEST:6.313 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":157,"skipped":2560,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:31:41.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0516 00:32:22.195137 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 16 00:32:22.195: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:32:22.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2243" for this suite. 
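The garbage-collector test above hinges on DeleteOptions.propagationPolicy: deleting the RC with policy Orphan strips the ownerReferences from its pods instead of cascading, so the 30-second wait verifies the pods outlive their owner. A sketch of the request body such a delete sends (newer kubectl exposes the same behavior as --cascade=orphan; older versions as --cascade=false):

# Body of DELETE /api/v1/namespaces/<ns>/replicationcontrollers/<name>
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan          # remove the RC but leave its pods running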
• [SLOW TEST:41.026 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":158,"skipped":2598,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:32:22.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 00:32:23.088: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 00:32:25.128: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185943, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185943, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185943, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185942, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:32:27.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185943, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185943, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185943, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185942, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 00:32:30.681: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:32:31.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3717" for this suite. STEP: Destroying namespace "webhook-3717-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.875 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":159,"skipped":2605,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:32:33.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 00:32:35.061: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 00:32:37.558: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185955, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185955, loc:(*time.Location)(0x7c342a0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185955, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185955, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:32:39.561: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185955, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185955, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185955, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725185955, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 00:32:42.734: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:32:52.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2783" for this suite. STEP: Destroying namespace "webhook-2783-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.919 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":160,"skipped":2628,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:32:52.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 16 00:32:53.112: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 16 00:32:53.130: INFO: Waiting for terminating namespaces to be deleted... May 16 00:32:53.132: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 16 00:32:53.155: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 16 00:32:53.155: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 16 00:32:53.155: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 16 00:32:53.155: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 16 00:32:53.155: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 16 00:32:53.155: INFO: Container kindnet-cni ready: true, restart count 0 May 16 00:32:53.155: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 16 00:32:53.155: INFO: Container kube-proxy ready: true, restart count 0 May 16 00:32:53.155: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 16 00:32:53.161: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 16 00:32:53.161: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 16 00:32:53.161: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 16 00:32:53.161: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 16 00:32:53.161: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 16 00:32:53.161: INFO: Container kindnet-cni ready: true, restart count 0 May 
16 00:32:53.161: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 16 00:32:53.161: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-3fa4c51e-9927-4afb-a180-450814853eb7 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-3fa4c51e-9927-4afb-a180-450814853eb7 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-3fa4c51e-9927-4afb-a180-450814853eb7 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:33:03.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9634" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.319 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":161,"skipped":2631,"failed":0} [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:33:03.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 16 00:33:03.436: INFO: Waiting up to 5m0s for pod "downwardapi-volume-850fc0f6-b9fb-4c0f-a18c-7a93ccf32ecd" in namespace "downward-api-4004" to be "Succeeded or Failed" May 16 00:33:03.440: INFO: Pod "downwardapi-volume-850fc0f6-b9fb-4c0f-a18c-7a93ccf32ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.306838ms May 16 00:33:05.442: INFO: Pod "downwardapi-volume-850fc0f6-b9fb-4c0f-a18c-7a93ccf32ecd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006013479s May 16 00:33:07.445: INFO: Pod "downwardapi-volume-850fc0f6-b9fb-4c0f-a18c-7a93ccf32ecd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008890761s STEP: Saw pod success May 16 00:33:07.445: INFO: Pod "downwardapi-volume-850fc0f6-b9fb-4c0f-a18c-7a93ccf32ecd" satisfied condition "Succeeded or Failed" May 16 00:33:07.447: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-850fc0f6-b9fb-4c0f-a18c-7a93ccf32ecd container client-container: STEP: delete the pod May 16 00:33:07.560: INFO: Waiting for pod downwardapi-volume-850fc0f6-b9fb-4c0f-a18c-7a93ccf32ecd to disappear May 16 00:33:07.572: INFO: Pod downwardapi-volume-850fc0f6-b9fb-4c0f-a18c-7a93ccf32ecd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:33:07.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4004" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":162,"skipped":2631,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:33:07.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 16 00:33:07.659: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:33:23.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6977" for this suite. 
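The crd-publish-openapi test above serves one CRD at multiple versions, renames one of them, and checks that the published OpenAPI spec tracks the rename while the other version keeps serving unchanged. The moving part is the versions list in the CRD spec; a minimal sketch with hypothetical names:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-foos.example.com  # hypothetical
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: e2e-test-foos
    singular: e2e-test-foo
    kind: E2eTestFoo
  versions:
  - name: v2                       # renaming this entry (say, to v3) must update the published spec
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v4                       # the untouched version must remain served as-is
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object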
• [SLOW TEST:16.044 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":163,"skipped":2676,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:33:23.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 16 00:33:23.849: INFO: Waiting up to 5m0s for pod "pod-0edff9df-8f65-4bb8-af9e-7f718d99e73a" in namespace "emptydir-2449" to be "Succeeded or Failed" May 16 00:33:23.914: INFO: Pod "pod-0edff9df-8f65-4bb8-af9e-7f718d99e73a": Phase="Pending", Reason="", readiness=false. Elapsed: 64.468962ms May 16 00:33:26.040: INFO: Pod "pod-0edff9df-8f65-4bb8-af9e-7f718d99e73a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190156657s May 16 00:33:28.351: INFO: Pod "pod-0edff9df-8f65-4bb8-af9e-7f718d99e73a": Phase="Running", Reason="", readiness=true. Elapsed: 4.501593739s May 16 00:33:30.355: INFO: Pod "pod-0edff9df-8f65-4bb8-af9e-7f718d99e73a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.505915029s STEP: Saw pod success May 16 00:33:30.355: INFO: Pod "pod-0edff9df-8f65-4bb8-af9e-7f718d99e73a" satisfied condition "Succeeded or Failed" May 16 00:33:30.358: INFO: Trying to get logs from node latest-worker2 pod pod-0edff9df-8f65-4bb8-af9e-7f718d99e73a container test-container: STEP: delete the pod May 16 00:33:30.393: INFO: Waiting for pod pod-0edff9df-8f65-4bb8-af9e-7f718d99e73a to disappear May 16 00:33:30.410: INFO: Pod pod-0edff9df-8f65-4bb8-af9e-7f718d99e73a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:33:30.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2449" for this suite. 
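The emptyDir test name encodes its parameters: (root,0644,default) means the file is written as root with mode 0644 on the default medium (node-local disk, as opposed to medium "Memory", i.e. tmpfs). A minimal sketch of the shape of pod it creates, with illustrative names and command:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo              # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                 # illustrative stand-in
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # default medium; {medium: Memory} would use tmpfs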
• [SLOW TEST:6.800 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":164,"skipped":2694,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:33:30.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 16 00:33:30.519: INFO: Waiting up to 5m0s for pod "var-expansion-959c7a42-31e9-4a7f-af2c-feb518e48945" in namespace "var-expansion-4945" to be "Succeeded or Failed" May 16 00:33:30.530: INFO: Pod "var-expansion-959c7a42-31e9-4a7f-af2c-feb518e48945": Phase="Pending", Reason="", readiness=false. Elapsed: 10.324068ms May 16 00:33:32.550: INFO: Pod "var-expansion-959c7a42-31e9-4a7f-af2c-feb518e48945": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030222453s May 16 00:33:34.554: INFO: Pod "var-expansion-959c7a42-31e9-4a7f-af2c-feb518e48945": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034633525s STEP: Saw pod success May 16 00:33:34.554: INFO: Pod "var-expansion-959c7a42-31e9-4a7f-af2c-feb518e48945" satisfied condition "Succeeded or Failed" May 16 00:33:34.558: INFO: Trying to get logs from node latest-worker2 pod var-expansion-959c7a42-31e9-4a7f-af2c-feb518e48945 container dapi-container: STEP: delete the pod May 16 00:33:34.608: INFO: Waiting for pod var-expansion-959c7a42-31e9-4a7f-af2c-feb518e48945 to disappear May 16 00:33:34.636: INFO: Pod var-expansion-959c7a42-31e9-4a7f-af2c-feb518e48945 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:33:34.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4945" for this suite. 
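The variable-expansion test above relies on the kubelet expanding $(VAR) references in a container's command and args against env vars defined for that container, before the process starts. A minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo         # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                 # illustrative stand-in
    env:
    - name: POD_NAME
      value: var-expansion-demo
    command: ["/bin/echo"]
    args: ["I am $(POD_NAME)"]     # expanded by the kubelet, not by a shell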
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":165,"skipped":2708,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:33:34.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 16 00:33:34.748: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b4cdc7f-3cc7-4774-b0eb-d68ae385396e" in namespace "downward-api-8085" to be "Succeeded or Failed" May 16 00:33:34.818: INFO: Pod "downwardapi-volume-7b4cdc7f-3cc7-4774-b0eb-d68ae385396e": Phase="Pending", Reason="", readiness=false. Elapsed: 70.016629ms May 16 00:33:36.980: INFO: Pod "downwardapi-volume-7b4cdc7f-3cc7-4774-b0eb-d68ae385396e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2317534s May 16 00:33:38.983: INFO: Pod "downwardapi-volume-7b4cdc7f-3cc7-4774-b0eb-d68ae385396e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.235292388s STEP: Saw pod success May 16 00:33:38.984: INFO: Pod "downwardapi-volume-7b4cdc7f-3cc7-4774-b0eb-d68ae385396e" satisfied condition "Succeeded or Failed" May 16 00:33:38.986: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7b4cdc7f-3cc7-4774-b0eb-d68ae385396e container client-container: STEP: delete the pod May 16 00:33:39.029: INFO: Waiting for pod downwardapi-volume-7b4cdc7f-3cc7-4774-b0eb-d68ae385396e to disappear May 16 00:33:39.034: INFO: Pod downwardapi-volume-7b4cdc7f-3cc7-4774-b0eb-d68ae385396e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:33:39.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8085" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":166,"skipped":2716,"failed":0} S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:33:39.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container May 16 00:33:43.762: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3093 pod-service-account-63f3fd54-1897-49b2-82a4-b22d953468df -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 16 00:33:46.903: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3093 pod-service-account-63f3fd54-1897-49b2-82a4-b22d953468df -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 16 00:33:47.124: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3093 pod-service-account-63f3fd54-1897-49b2-82a4-b22d953468df -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:33:47.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3093" for this suite. • [SLOW TEST:8.273 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":167,"skipped":2717,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:33:47.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:33:47.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-2214" for this suite. 
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":168,"skipped":2740,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:33:47.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:33:47.567: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 16 00:33:52.572: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 16 00:33:52.572: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 16 00:33:54.577: INFO: Creating deployment "test-rollover-deployment" May 16 00:33:54.794: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 16 00:33:56.801: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 16 00:33:56.808: INFO: Ensure that both replica sets have 1 created replica May 16 00:33:56.814: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 16 00:33:56.822: INFO: Updating deployment test-rollover-deployment May 16 00:33:56.822: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 16 00:33:58.850: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 16 00:33:58.855: INFO: Make sure deployment "test-rollover-deployment" is complete May 16 00:33:58.859: INFO: all replica sets need to contain the pod-template-hash label May 16 00:33:58.859: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186037, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:34:00.867: INFO: all replica sets need to contain the pod-template-hash label May 16 00:34:00.867: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186040, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:34:02.865: INFO: all replica sets need to contain the pod-template-hash label May 16 00:34:02.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186040, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:34:04.866: INFO: all replica sets need to contain the pod-template-hash label May 16 00:34:04.866: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186040, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:34:06.867: INFO: all replica sets need to contain the pod-template-hash label May 16 00:34:06.868: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186040, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:34:08.868: INFO: all replica sets need to contain the pod-template-hash label May 16 00:34:08.868: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186040, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186034, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:34:10.868: INFO: May 16 00:34:10.868: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 16 00:34:10.875: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-7044 /apis/apps/v1/namespaces/deployment-7044/deployments/test-rollover-deployment c28813fb-46c3-4b1e-b52e-6c1f61afa9e7 5015548 2 2020-05-16 00:33:54 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-16 00:33:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-16 00:34:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003368538 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-16 00:33:54 +0000 UTC,LastTransitionTime:2020-05-16 00:33:54 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-16 00:34:10 +0000 UTC,LastTransitionTime:2020-05-16 00:33:54 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 16 00:34:10.878: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-7044 /apis/apps/v1/namespaces/deployment-7044/replicasets/test-rollover-deployment-7c4fd9c879 7d517e24-4e88-4b97-8a63-f3d18cb23c90 5015537 2 2020-05-16 00:33:56 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment c28813fb-46c3-4b1e-b52e-6c1f61afa9e7 0xc0033ae667 0xc0033ae668}] [] [{kube-controller-manager Update apps/v1 2020-05-16 00:34:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c28813fb-46c3-4b1e-b52e-6c1f61afa9e7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033ae708 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 16 00:34:10.878: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 16 00:34:10.878: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-7044 /apis/apps/v1/namespaces/deployment-7044/replicasets/test-rollover-controller 8b92c2af-020b-4556-ab65-a13f65da4dac 5015547 2 2020-05-16 00:33:47 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment c28813fb-46c3-4b1e-b52e-6c1f61afa9e7 0xc0033ae3ff 0xc0033ae410}] [] [{e2e.test Update apps/v1 2020-05-16 00:33:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-16 00:34:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c28813fb-46c3-4b1e-b52e-6c1f61afa9e7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0033ae4c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 16 00:34:10.878: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-7044 /apis/apps/v1/namespaces/deployment-7044/replicasets/test-rollover-deployment-5686c4cfd5 a056c5c9-0eca-4ced-b66f-54e20a607b7a 5015478 2 2020-05-16 00:33:54 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment c28813fb-46c3-4b1e-b52e-6c1f61afa9e7 0xc0033ae557 0xc0033ae558}] [] [{kube-controller-manager Update apps/v1 2020-05-16 00:33:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c28813fb-46c3-4b1e-b52e-6c1f61afa9e7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033ae5f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 16 00:34:10.881: INFO: Pod "test-rollover-deployment-7c4fd9c879-xddc8" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-xddc8 test-rollover-deployment-7c4fd9c879- deployment-7044 /api/v1/namespaces/deployment-7044/pods/test-rollover-deployment-7c4fd9c879-xddc8 4a335fe9-5188-41b6-8e44-b848c172604c 5015495 0 2020-05-16 00:33:57 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 7d517e24-4e88-4b97-8a63-f3d18cb23c90 0xc003368af7 0xc003368af8}] [] 
[{kube-controller-manager Update v1 2020-05-16 00:33:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d517e24-4e88-4b97-8a63-f3d18cb23c90\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 00:33:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.209\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2nkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2nkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2nkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect
:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 00:33:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 00:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 00:33:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 00:33:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.209,StartTime:2020-05-16 00:33:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 00:33:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://254b17640e127ccc8d9196ae517ca9840bd2bce97d88e4d60b3291815e8a38a5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.209,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:34:10.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7044" for this suite. 
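The deployment dump above shows the rollover configuration this test depends on: Replicas 1, MinReadySeconds 10, and a RollingUpdate strategy with MaxUnavailable 0 / MaxSurge 1. MinReadySeconds is why AvailableReplicas lags ReadyReplicas in the polled statuses: a new pod must stay Ready for 10 seconds before it counts as available. A sketch reconstructing that spec from the dumped values (labels and container taken from the dump; the rest minimal):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	replicas := int32(1)
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)
	labels := map[string]string{"name": "rollover-pod"}

	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			// New pods must stay Ready for 10s before counting as available.
			MinReadySeconds: 10,
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable, // never drop below the desired count
					MaxSurge:       &maxSurge,       // allow one extra pod during rollout
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
					}},
				},
			},
		},
	}
	y, err := yaml.Marshal(dep)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(y))
}
```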
• [SLOW TEST:23.414 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":169,"skipped":2745,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:34:10.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 16 00:34:17.641: INFO: Successfully updated pod "adopt-release-nm4lx" STEP: Checking that the Job readopts the Pod May 16 00:34:17.641: INFO: Waiting up to 15m0s for pod "adopt-release-nm4lx" in namespace "job-4428" to be "adopted" May 16 00:34:17.644: INFO: Pod "adopt-release-nm4lx": Phase="Running", Reason="", readiness=true. Elapsed: 3.307225ms May 16 00:34:19.649: INFO: Pod "adopt-release-nm4lx": Phase="Running", Reason="", readiness=true. Elapsed: 2.007452896s May 16 00:34:19.649: INFO: Pod "adopt-release-nm4lx" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 16 00:34:20.158: INFO: Successfully updated pod "adopt-release-nm4lx" STEP: Checking that the Job releases the Pod May 16 00:34:20.158: INFO: Waiting up to 15m0s for pod "adopt-release-nm4lx" in namespace "job-4428" to be "released" May 16 00:34:20.178: INFO: Pod "adopt-release-nm4lx": Phase="Running", Reason="", readiness=true. Elapsed: 20.076477ms May 16 00:34:22.422: INFO: Pod "adopt-release-nm4lx": Phase="Running", Reason="", readiness=true. Elapsed: 2.263950583s May 16 00:34:22.422: INFO: Pod "adopt-release-nm4lx" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:34:22.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4428" for this suite. 
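The Job adopt/release test above exercises controller ownership semantics: a controller adopts a pod that matches its selector and has no controllerRef, and releases (drops its ownerReference from) a pod whose labels stop matching. A hedged client-go sketch of the two patches involved; the namespace, pod name, and the `job` label key are all illustrative, not the test's actual values:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns, pod := "default", "adopt-release-example" // hypothetical names

	// Orphan the pod: clearing ownerReferences removes the Job's controllerRef.
	// Because the pod still matches the Job's label selector, the Job
	// controller re-adopts it, which is what the test waits for.
	orphan := []byte(`{"metadata":{"ownerReferences":null}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(ctx, pod, types.MergePatchType, orphan, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Release the pod: removing the selector label (key shown here is
	// illustrative) makes the pod non-matching, so the controller drops
	// its ownerReference instead of re-adopting.
	release := []byte(`{"metadata":{"labels":{"job":null}}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(ctx, pod, types.MergePatchType, release, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```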
• [SLOW TEST:11.973 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":170,"skipped":2767,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:34:22.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-462 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-462 I0516 00:34:23.425062 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-462, replica count: 2 I0516 00:34:26.475651 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 00:34:29.475863 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 16 00:34:29.475: INFO: Creating new exec pod May 16 00:34:34.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-462 execpodzth5t -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 16 00:34:34.722: INFO: stderr: "I0516 00:34:34.631070 2431 log.go:172] (0xc000b3b4a0) (0xc000b1c500) Create stream\nI0516 00:34:34.631143 2431 log.go:172] (0xc000b3b4a0) (0xc000b1c500) Stream added, broadcasting: 1\nI0516 00:34:34.635648 2431 log.go:172] (0xc000b3b4a0) Reply frame received for 1\nI0516 00:34:34.635700 2431 log.go:172] (0xc000b3b4a0) (0xc00013b680) Create stream\nI0516 00:34:34.635714 2431 log.go:172] (0xc000b3b4a0) (0xc00013b680) Stream added, broadcasting: 3\nI0516 00:34:34.636729 2431 log.go:172] (0xc000b3b4a0) Reply frame received for 3\nI0516 00:34:34.636760 2431 log.go:172] (0xc000b3b4a0) (0xc0005c8140) Create stream\nI0516 00:34:34.636771 2431 log.go:172] (0xc000b3b4a0) (0xc0005c8140) Stream added, broadcasting: 5\nI0516 00:34:34.637886 2431 log.go:172] (0xc000b3b4a0) Reply frame received for 5\nI0516 00:34:34.714026 2431 log.go:172] (0xc000b3b4a0) Data frame received for 5\nI0516 00:34:34.714071 2431 log.go:172] (0xc0005c8140) (5) Data frame handling\nI0516 00:34:34.714097 2431 log.go:172] (0xc0005c8140) (5) Data frame sent\nI0516 00:34:34.714126 2431 
log.go:172] (0xc000b3b4a0) Data frame received for 5\nI0516 00:34:34.714230 2431 log.go:172] (0xc0005c8140) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0516 00:34:34.714276 2431 log.go:172] (0xc0005c8140) (5) Data frame sent\nI0516 00:34:34.714705 2431 log.go:172] (0xc000b3b4a0) Data frame received for 5\nI0516 00:34:34.714742 2431 log.go:172] (0xc0005c8140) (5) Data frame handling\nI0516 00:34:34.714797 2431 log.go:172] (0xc000b3b4a0) Data frame received for 3\nI0516 00:34:34.714834 2431 log.go:172] (0xc00013b680) (3) Data frame handling\nI0516 00:34:34.716601 2431 log.go:172] (0xc000b3b4a0) Data frame received for 1\nI0516 00:34:34.716633 2431 log.go:172] (0xc000b1c500) (1) Data frame handling\nI0516 00:34:34.716657 2431 log.go:172] (0xc000b1c500) (1) Data frame sent\nI0516 00:34:34.716684 2431 log.go:172] (0xc000b3b4a0) (0xc000b1c500) Stream removed, broadcasting: 1\nI0516 00:34:34.716709 2431 log.go:172] (0xc000b3b4a0) Go away received\nI0516 00:34:34.717347 2431 log.go:172] (0xc000b3b4a0) (0xc000b1c500) Stream removed, broadcasting: 1\nI0516 00:34:34.717375 2431 log.go:172] (0xc000b3b4a0) (0xc00013b680) Stream removed, broadcasting: 3\nI0516 00:34:34.717386 2431 log.go:172] (0xc000b3b4a0) (0xc0005c8140) Stream removed, broadcasting: 5\n" May 16 00:34:34.723: INFO: stdout: "" May 16 00:34:34.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-462 execpodzth5t -- /bin/sh -x -c nc -zv -t -w 2 10.109.221.57 80' May 16 00:34:34.959: INFO: stderr: "I0516 00:34:34.862075 2451 log.go:172] (0xc000aa22c0) (0xc0006cdcc0) Create stream\nI0516 00:34:34.862124 2451 log.go:172] (0xc000aa22c0) (0xc0006cdcc0) Stream added, broadcasting: 1\nI0516 00:34:34.864170 2451 log.go:172] (0xc000aa22c0) Reply frame received for 1\nI0516 00:34:34.864209 2451 log.go:172] (0xc000aa22c0) (0xc0005057c0) Create stream\nI0516 00:34:34.864221 2451 log.go:172] (0xc000aa22c0) (0xc0005057c0) Stream added, broadcasting: 3\nI0516 00:34:34.865051 2451 log.go:172] (0xc000aa22c0) Reply frame received for 3\nI0516 00:34:34.865093 2451 log.go:172] (0xc000aa22c0) (0xc000505a40) Create stream\nI0516 00:34:34.865272 2451 log.go:172] (0xc000aa22c0) (0xc000505a40) Stream added, broadcasting: 5\nI0516 00:34:34.866007 2451 log.go:172] (0xc000aa22c0) Reply frame received for 5\nI0516 00:34:34.952546 2451 log.go:172] (0xc000aa22c0) Data frame received for 3\nI0516 00:34:34.952581 2451 log.go:172] (0xc0005057c0) (3) Data frame handling\nI0516 00:34:34.952601 2451 log.go:172] (0xc000aa22c0) Data frame received for 5\nI0516 00:34:34.952609 2451 log.go:172] (0xc000505a40) (5) Data frame handling\nI0516 00:34:34.952618 2451 log.go:172] (0xc000505a40) (5) Data frame sent\nI0516 00:34:34.952626 2451 log.go:172] (0xc000aa22c0) Data frame received for 5\nI0516 00:34:34.952638 2451 log.go:172] (0xc000505a40) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.221.57 80\nConnection to 10.109.221.57 80 port [tcp/http] succeeded!\nI0516 00:34:34.954227 2451 log.go:172] (0xc000aa22c0) Data frame received for 1\nI0516 00:34:34.954276 2451 log.go:172] (0xc0006cdcc0) (1) Data frame handling\nI0516 00:34:34.954320 2451 log.go:172] (0xc0006cdcc0) (1) Data frame sent\nI0516 00:34:34.954380 2451 log.go:172] (0xc000aa22c0) (0xc0006cdcc0) Stream removed, broadcasting: 1\nI0516 00:34:34.954532 2451 log.go:172] (0xc000aa22c0) Go away received\nI0516 00:34:34.954992 2451 log.go:172] 
(0xc000aa22c0) (0xc0006cdcc0) Stream removed, broadcasting: 1\nI0516 00:34:34.955061 2451 log.go:172] (0xc000aa22c0) (0xc0005057c0) Stream removed, broadcasting: 3\nI0516 00:34:34.955334 2451 log.go:172] (0xc000aa22c0) (0xc000505a40) Stream removed, broadcasting: 5\n" May 16 00:34:34.960: INFO: stdout: "" May 16 00:34:34.960: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:34:34.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-462" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.128 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":171,"skipped":2792,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:34:34.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 16 00:34:35.091: INFO: Waiting up to 5m0s for pod "downward-api-252c415c-3e60-4ed9-b68b-30bc83a45883" in namespace "downward-api-687" to be "Succeeded or Failed" May 16 00:34:35.113: INFO: Pod "downward-api-252c415c-3e60-4ed9-b68b-30bc83a45883": Phase="Pending", Reason="", readiness=false. Elapsed: 22.004095ms May 16 00:34:37.117: INFO: Pod "downward-api-252c415c-3e60-4ed9-b68b-30bc83a45883": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025754298s May 16 00:34:39.126: INFO: Pod "downward-api-252c415c-3e60-4ed9-b68b-30bc83a45883": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034348062s STEP: Saw pod success May 16 00:34:39.126: INFO: Pod "downward-api-252c415c-3e60-4ed9-b68b-30bc83a45883" satisfied condition "Succeeded or Failed" May 16 00:34:39.128: INFO: Trying to get logs from node latest-worker2 pod downward-api-252c415c-3e60-4ed9-b68b-30bc83a45883 container dapi-container: STEP: delete the pod May 16 00:34:39.176: INFO: Waiting for pod downward-api-252c415c-3e60-4ed9-b68b-30bc83a45883 to disappear May 16 00:34:39.185: INFO: Pod downward-api-252c415c-3e60-4ed9-b68b-30bc83a45883 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:34:39.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-687" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":172,"skipped":2805,"failed":0} ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:34:39.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-32817855-d4f0-4044-bc33-c386746ace9f STEP: Creating configMap with name cm-test-opt-upd-986e932a-f287-459c-bd39-e2bfacea68cc STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-32817855-d4f0-4044-bc33-c386746ace9f STEP: Updating configmap cm-test-opt-upd-986e932a-f287-459c-bd39-e2bfacea68cc STEP: Creating configMap with name cm-test-opt-create-c5b584d3-3db6-4779-9f20-946433a01ed9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:36:10.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5799" for this suite. 
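The Projected configMap test above (note the ~91s runtime) hinges on two properties of projected ConfigMap sources: an `Optional` source lets the pod start even when the ConfigMap does not exist yet, and the kubelet later adds, updates, or removes the projected files as the ConfigMaps change. A minimal sketch of such a volume, reusing the ConfigMap name prefixes from the log (the generated UUID suffixes are dropped):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-configmaps",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// This ConfigMap exists at pod creation and is deleted
					// mid-test; its projected files should disappear.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						Optional:             &optional,
					}},
					// This one is created only after the pod is running;
					// Optional lets the pod start without it, and the kubelet
					// projects its keys once it appears.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
	y, err := yaml.Marshal(vol)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(y))
}
```

The long wait in the test reflects the kubelet's sync period: projected-volume updates are eventually consistent, not instantaneous.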
• [SLOW TEST:91.037 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":173,"skipped":2805,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:36:10.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:36:10.290: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3336' May 16 00:36:10.569: INFO: stderr: "" May 16 00:36:10.569: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 16 00:36:10.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3336' May 16 00:36:10.919: INFO: stderr: "" May 16 00:36:10.919: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 16 00:36:11.926: INFO: Selector matched 1 pods for map[app:agnhost] May 16 00:36:11.927: INFO: Found 0 / 1 May 16 00:36:13.082: INFO: Selector matched 1 pods for map[app:agnhost] May 16 00:36:13.082: INFO: Found 0 / 1 May 16 00:36:13.923: INFO: Selector matched 1 pods for map[app:agnhost] May 16 00:36:13.924: INFO: Found 0 / 1 May 16 00:36:14.923: INFO: Selector matched 1 pods for map[app:agnhost] May 16 00:36:14.923: INFO: Found 1 / 1 May 16 00:36:14.923: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 16 00:36:14.926: INFO: Selector matched 1 pods for map[app:agnhost] May 16 00:36:14.926: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 16 00:36:14.926: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-8t4x9 --namespace=kubectl-3336'
May 16 00:36:15.042: INFO: stderr: ""
May 16 00:36:15.042: INFO: stdout: "Name: agnhost-master-8t4x9\nNamespace: kubectl-3336\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Sat, 16 May 2020 00:36:10 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.1.152\nIPs:\n IP: 10.244.1.152\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://573030d254d59e0026bcd9963076f426b6bcc1b5ec31b24fb4aaf56f98261258\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 16 May 2020 00:36:13 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-vvt2w (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-vvt2w:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-vvt2w\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-3336/agnhost-master-8t4x9 to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 2s kubelet, latest-worker Created container agnhost-master\n Normal Started 2s kubelet, latest-worker Started container agnhost-master\n"
May 16 00:36:15.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-3336'
May 16 00:36:15.160: INFO: stderr: ""
May 16 00:36:15.160: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-3336\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-8t4x9\n"
May 16 00:36:15.160: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-3336'
May 16 00:36:15.267: INFO: stderr: ""
May 16 00:36:15.267: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-3336\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.99.142.222\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.152:6379\nSession Affinity: None\nEvents: <none>\n"
May 16 00:36:15.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe node latest-control-plane'
May 16 00:36:15.599: INFO: stderr: ""
May 16 00:36:15.599: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: <unset>\n RenewTime: Sat, 16 May 2020 00:36:09 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 16 May 2020 00:31:31 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 16 May 2020 00:31:31 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 16 May 2020 00:31:31 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 16 May 2020 00:31:31 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 16d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 16d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 16d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 16d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 16d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 16d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: <none>\n"
May 16 00:36:15.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe namespace kubectl-3336'
May 16 00:36:15.921: INFO: stderr: ""
May 16 00:36:15.921: INFO: stdout: "Name: kubectl-3336\nLabels: e2e-framework=kubectl\n e2e-run=499df6d0-68de-42c3-ab3a-1c8bd4fa8149\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 00:36:15.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3336" for this suite.
• [SLOW TEST:5.698 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1083
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":174,"skipped":2810,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 00:36:15.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 16 00:36:16.204: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 00:36:16.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7466" for this suite.
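
An aside for readers reproducing this spec by hand: the status sub-resource it gets, updates, and patches is opt-in per CRD version. A minimal sketch of such a definition (group, kind, and names here are hypothetical; the suite generates randomized ones):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.example.com          # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    subresources:
      status: {}                   # exposes /status, the endpoint this spec GETs, PUTs, and PATCHes

With status enabled, writes to the main resource ignore .status and writes to /status ignore everything else, which is the split the spec verifies.
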
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":175,"skipped":2827,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:36:17.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8402 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8402 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8402 May 16 00:36:17.274: INFO: Found 0 stateful pods, waiting for 1 May 16 00:36:27.279: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 16 00:36:27.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8402 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 00:36:27.561: INFO: stderr: "I0516 00:36:27.415491 2614 log.go:172] (0xc000a59a20) (0xc00066c5a0) Create stream\nI0516 00:36:27.415536 2614 log.go:172] (0xc000a59a20) (0xc00066c5a0) Stream added, broadcasting: 1\nI0516 00:36:27.417697 2614 log.go:172] (0xc000a59a20) Reply frame received for 1\nI0516 00:36:27.417733 2614 log.go:172] (0xc000a59a20) (0xc0003ad9a0) Create stream\nI0516 00:36:27.417742 2614 log.go:172] (0xc000a59a20) (0xc0003ad9a0) Stream added, broadcasting: 3\nI0516 00:36:27.418601 2614 log.go:172] (0xc000a59a20) Reply frame received for 3\nI0516 00:36:27.418625 2614 log.go:172] (0xc000a59a20) (0xc00066cf00) Create stream\nI0516 00:36:27.418632 2614 log.go:172] (0xc000a59a20) (0xc00066cf00) Stream added, broadcasting: 5\nI0516 00:36:27.419434 2614 log.go:172] (0xc000a59a20) Reply frame received for 5\nI0516 00:36:27.518818 2614 log.go:172] (0xc000a59a20) Data frame received for 5\nI0516 00:36:27.518835 2614 log.go:172] (0xc00066cf00) (5) Data frame handling\nI0516 00:36:27.518843 2614 log.go:172] (0xc00066cf00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 00:36:27.554083 2614 log.go:172] (0xc000a59a20) Data frame received for 3\nI0516 00:36:27.554176 2614 log.go:172] (0xc0003ad9a0) (3) Data frame handling\nI0516 
00:36:27.554203 2614 log.go:172] (0xc0003ad9a0) (3) Data frame sent\nI0516 00:36:27.554213 2614 log.go:172] (0xc000a59a20) Data frame received for 3\nI0516 00:36:27.554220 2614 log.go:172] (0xc0003ad9a0) (3) Data frame handling\nI0516 00:36:27.554442 2614 log.go:172] (0xc000a59a20) Data frame received for 5\nI0516 00:36:27.554462 2614 log.go:172] (0xc00066cf00) (5) Data frame handling\nI0516 00:36:27.556085 2614 log.go:172] (0xc000a59a20) Data frame received for 1\nI0516 00:36:27.556133 2614 log.go:172] (0xc00066c5a0) (1) Data frame handling\nI0516 00:36:27.556169 2614 log.go:172] (0xc00066c5a0) (1) Data frame sent\nI0516 00:36:27.556183 2614 log.go:172] (0xc000a59a20) (0xc00066c5a0) Stream removed, broadcasting: 1\nI0516 00:36:27.556191 2614 log.go:172] (0xc000a59a20) Go away received\nI0516 00:36:27.556725 2614 log.go:172] (0xc000a59a20) (0xc00066c5a0) Stream removed, broadcasting: 1\nI0516 00:36:27.556758 2614 log.go:172] (0xc000a59a20) (0xc0003ad9a0) Stream removed, broadcasting: 3\nI0516 00:36:27.556780 2614 log.go:172] (0xc000a59a20) (0xc00066cf00) Stream removed, broadcasting: 5\n" May 16 00:36:27.561: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 00:36:27.561: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 00:36:27.564: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 16 00:36:37.588: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 16 00:36:37.588: INFO: Waiting for statefulset status.replicas updated to 0 May 16 00:36:37.646: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999364s May 16 00:36:38.670: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.977873078s May 16 00:36:39.675: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.954177285s May 16 00:36:40.680: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.948948005s May 16 00:36:41.685: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.944213801s May 16 00:36:42.690: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.938631315s May 16 00:36:43.699: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.933686539s May 16 00:36:44.707: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.922247873s May 16 00:36:45.712: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.917137632s May 16 00:36:46.717: INFO: Verifying statefulset ss doesn't scale past 1 for another 911.609133ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8402 May 16 00:36:47.720: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8402 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 00:36:47.926: INFO: stderr: "I0516 00:36:47.849527 2631 log.go:172] (0xc0009e1ef0) (0xc000b0e0a0) Create stream\nI0516 00:36:47.849571 2631 log.go:172] (0xc0009e1ef0) (0xc000b0e0a0) Stream added, broadcasting: 1\nI0516 00:36:47.852662 2631 log.go:172] (0xc0009e1ef0) Reply frame received for 1\nI0516 00:36:47.852684 2631 log.go:172] (0xc0009e1ef0) (0xc000848000) Create stream\nI0516 00:36:47.852694 2631 log.go:172] (0xc0009e1ef0) (0xc000848000) Stream added, broadcasting: 3\nI0516 00:36:47.853921 2631 log.go:172] (0xc0009e1ef0) 
Reply frame received for 3\nI0516 00:36:47.853961 2631 log.go:172] (0xc0009e1ef0) (0xc00081a640) Create stream\nI0516 00:36:47.853978 2631 log.go:172] (0xc0009e1ef0) (0xc00081a640) Stream added, broadcasting: 5\nI0516 00:36:47.854792 2631 log.go:172] (0xc0009e1ef0) Reply frame received for 5\nI0516 00:36:47.919914 2631 log.go:172] (0xc0009e1ef0) Data frame received for 5\nI0516 00:36:47.919934 2631 log.go:172] (0xc00081a640) (5) Data frame handling\nI0516 00:36:47.919944 2631 log.go:172] (0xc00081a640) (5) Data frame sent\nI0516 00:36:47.919951 2631 log.go:172] (0xc0009e1ef0) Data frame received for 5\nI0516 00:36:47.919957 2631 log.go:172] (0xc00081a640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 00:36:47.919975 2631 log.go:172] (0xc0009e1ef0) Data frame received for 3\nI0516 00:36:47.919982 2631 log.go:172] (0xc000848000) (3) Data frame handling\nI0516 00:36:47.919990 2631 log.go:172] (0xc000848000) (3) Data frame sent\nI0516 00:36:47.920042 2631 log.go:172] (0xc0009e1ef0) Data frame received for 3\nI0516 00:36:47.920050 2631 log.go:172] (0xc000848000) (3) Data frame handling\nI0516 00:36:47.921038 2631 log.go:172] (0xc0009e1ef0) Data frame received for 1\nI0516 00:36:47.921051 2631 log.go:172] (0xc000b0e0a0) (1) Data frame handling\nI0516 00:36:47.921066 2631 log.go:172] (0xc000b0e0a0) (1) Data frame sent\nI0516 00:36:47.921082 2631 log.go:172] (0xc0009e1ef0) (0xc000b0e0a0) Stream removed, broadcasting: 1\nI0516 00:36:47.921344 2631 log.go:172] (0xc0009e1ef0) Go away received\nI0516 00:36:47.921383 2631 log.go:172] (0xc0009e1ef0) (0xc000b0e0a0) Stream removed, broadcasting: 1\nI0516 00:36:47.921397 2631 log.go:172] (0xc0009e1ef0) (0xc000848000) Stream removed, broadcasting: 3\nI0516 00:36:47.921406 2631 log.go:172] (0xc0009e1ef0) (0xc00081a640) Stream removed, broadcasting: 5\n" May 16 00:36:47.926: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 00:36:47.926: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 00:36:47.929: INFO: Found 1 stateful pods, waiting for 3 May 16 00:36:57.934: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 16 00:36:57.934: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 16 00:36:57.934: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 16 00:36:57.945: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8402 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 00:36:58.152: INFO: stderr: "I0516 00:36:58.079963 2651 log.go:172] (0xc000a4d130) (0xc00052d360) Create stream\nI0516 00:36:58.080026 2651 log.go:172] (0xc000a4d130) (0xc00052d360) Stream added, broadcasting: 1\nI0516 00:36:58.082461 2651 log.go:172] (0xc000a4d130) Reply frame received for 1\nI0516 00:36:58.082498 2651 log.go:172] (0xc000a4d130) (0xc0005565a0) Create stream\nI0516 00:36:58.082510 2651 log.go:172] (0xc000a4d130) (0xc0005565a0) Stream added, broadcasting: 3\nI0516 00:36:58.083446 2651 log.go:172] (0xc000a4d130) Reply frame received for 3\nI0516 00:36:58.083486 2651 log.go:172] (0xc000a4d130) (0xc00052dcc0) Create stream\nI0516 00:36:58.083500 2651 
log.go:172] (0xc000a4d130) (0xc00052dcc0) Stream added, broadcasting: 5\nI0516 00:36:58.084529 2651 log.go:172] (0xc000a4d130) Reply frame received for 5\nI0516 00:36:58.144613 2651 log.go:172] (0xc000a4d130) Data frame received for 5\nI0516 00:36:58.144661 2651 log.go:172] (0xc00052dcc0) (5) Data frame handling\nI0516 00:36:58.144696 2651 log.go:172] (0xc00052dcc0) (5) Data frame sent\nI0516 00:36:58.144731 2651 log.go:172] (0xc000a4d130) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 00:36:58.144756 2651 log.go:172] (0xc00052dcc0) (5) Data frame handling\nI0516 00:36:58.144778 2651 log.go:172] (0xc000a4d130) Data frame received for 3\nI0516 00:36:58.144807 2651 log.go:172] (0xc0005565a0) (3) Data frame handling\nI0516 00:36:58.144825 2651 log.go:172] (0xc0005565a0) (3) Data frame sent\nI0516 00:36:58.144837 2651 log.go:172] (0xc000a4d130) Data frame received for 3\nI0516 00:36:58.144847 2651 log.go:172] (0xc0005565a0) (3) Data frame handling\nI0516 00:36:58.146715 2651 log.go:172] (0xc000a4d130) Data frame received for 1\nI0516 00:36:58.146758 2651 log.go:172] (0xc00052d360) (1) Data frame handling\nI0516 00:36:58.146785 2651 log.go:172] (0xc00052d360) (1) Data frame sent\nI0516 00:36:58.146835 2651 log.go:172] (0xc000a4d130) (0xc00052d360) Stream removed, broadcasting: 1\nI0516 00:36:58.146867 2651 log.go:172] (0xc000a4d130) Go away received\nI0516 00:36:58.147395 2651 log.go:172] (0xc000a4d130) (0xc00052d360) Stream removed, broadcasting: 1\nI0516 00:36:58.147417 2651 log.go:172] (0xc000a4d130) (0xc0005565a0) Stream removed, broadcasting: 3\nI0516 00:36:58.147437 2651 log.go:172] (0xc000a4d130) (0xc00052dcc0) Stream removed, broadcasting: 5\n" May 16 00:36:58.152: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 00:36:58.152: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 00:36:58.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8402 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 00:36:58.376: INFO: stderr: "I0516 00:36:58.289538 2670 log.go:172] (0xc000a8d130) (0xc0006eaa00) Create stream\nI0516 00:36:58.289597 2670 log.go:172] (0xc000a8d130) (0xc0006eaa00) Stream added, broadcasting: 1\nI0516 00:36:58.300087 2670 log.go:172] (0xc000a8d130) Reply frame received for 1\nI0516 00:36:58.300119 2670 log.go:172] (0xc000a8d130) (0xc00061ac80) Create stream\nI0516 00:36:58.300127 2670 log.go:172] (0xc000a8d130) (0xc00061ac80) Stream added, broadcasting: 3\nI0516 00:36:58.300904 2670 log.go:172] (0xc000a8d130) Reply frame received for 3\nI0516 00:36:58.300933 2670 log.go:172] (0xc000a8d130) (0xc00059e500) Create stream\nI0516 00:36:58.300944 2670 log.go:172] (0xc000a8d130) (0xc00059e500) Stream added, broadcasting: 5\nI0516 00:36:58.301966 2670 log.go:172] (0xc000a8d130) Reply frame received for 5\nI0516 00:36:58.354725 2670 log.go:172] (0xc000a8d130) Data frame received for 5\nI0516 00:36:58.354752 2670 log.go:172] (0xc00059e500) (5) Data frame handling\nI0516 00:36:58.354771 2670 log.go:172] (0xc00059e500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 00:36:58.369645 2670 log.go:172] (0xc000a8d130) Data frame received for 3\nI0516 00:36:58.369666 2670 log.go:172] (0xc00061ac80) (3) Data frame handling\nI0516 00:36:58.369676 2670 log.go:172] (0xc00061ac80) 
(3) Data frame sent\nI0516 00:36:58.369736 2670 log.go:172] (0xc000a8d130) Data frame received for 3\nI0516 00:36:58.369787 2670 log.go:172] (0xc00061ac80) (3) Data frame handling\nI0516 00:36:58.369805 2670 log.go:172] (0xc000a8d130) Data frame received for 5\nI0516 00:36:58.369839 2670 log.go:172] (0xc00059e500) (5) Data frame handling\nI0516 00:36:58.371251 2670 log.go:172] (0xc000a8d130) Data frame received for 1\nI0516 00:36:58.371263 2670 log.go:172] (0xc0006eaa00) (1) Data frame handling\nI0516 00:36:58.371270 2670 log.go:172] (0xc0006eaa00) (1) Data frame sent\nI0516 00:36:58.371445 2670 log.go:172] (0xc000a8d130) (0xc0006eaa00) Stream removed, broadcasting: 1\nI0516 00:36:58.371670 2670 log.go:172] (0xc000a8d130) Go away received\nI0516 00:36:58.371814 2670 log.go:172] (0xc000a8d130) (0xc0006eaa00) Stream removed, broadcasting: 1\nI0516 00:36:58.371838 2670 log.go:172] (0xc000a8d130) (0xc00061ac80) Stream removed, broadcasting: 3\nI0516 00:36:58.371854 2670 log.go:172] (0xc000a8d130) (0xc00059e500) Stream removed, broadcasting: 5\n" May 16 00:36:58.376: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 00:36:58.376: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 00:36:58.376: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8402 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 00:36:58.616: INFO: stderr: "I0516 00:36:58.510010 2690 log.go:172] (0xc0009cd130) (0xc000675e00) Create stream\nI0516 00:36:58.510087 2690 log.go:172] (0xc0009cd130) (0xc000675e00) Stream added, broadcasting: 1\nI0516 00:36:58.514858 2690 log.go:172] (0xc0009cd130) Reply frame received for 1\nI0516 00:36:58.514884 2690 log.go:172] (0xc0009cd130) (0xc000664fa0) Create stream\nI0516 00:36:58.514892 2690 log.go:172] (0xc0009cd130) (0xc000664fa0) Stream added, broadcasting: 3\nI0516 00:36:58.515563 2690 log.go:172] (0xc0009cd130) Reply frame received for 3\nI0516 00:36:58.515581 2690 log.go:172] (0xc0009cd130) (0xc0005c03c0) Create stream\nI0516 00:36:58.515588 2690 log.go:172] (0xc0009cd130) (0xc0005c03c0) Stream added, broadcasting: 5\nI0516 00:36:58.516372 2690 log.go:172] (0xc0009cd130) Reply frame received for 5\nI0516 00:36:58.581888 2690 log.go:172] (0xc0009cd130) Data frame received for 5\nI0516 00:36:58.581935 2690 log.go:172] (0xc0005c03c0) (5) Data frame handling\nI0516 00:36:58.581969 2690 log.go:172] (0xc0005c03c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 00:36:58.610573 2690 log.go:172] (0xc0009cd130) Data frame received for 3\nI0516 00:36:58.610594 2690 log.go:172] (0xc000664fa0) (3) Data frame handling\nI0516 00:36:58.610608 2690 log.go:172] (0xc000664fa0) (3) Data frame sent\nI0516 00:36:58.610624 2690 log.go:172] (0xc0009cd130) Data frame received for 3\nI0516 00:36:58.610641 2690 log.go:172] (0xc000664fa0) (3) Data frame handling\nI0516 00:36:58.610804 2690 log.go:172] (0xc0009cd130) Data frame received for 5\nI0516 00:36:58.610834 2690 log.go:172] (0xc0005c03c0) (5) Data frame handling\nI0516 00:36:58.612509 2690 log.go:172] (0xc0009cd130) Data frame received for 1\nI0516 00:36:58.612534 2690 log.go:172] (0xc000675e00) (1) Data frame handling\nI0516 00:36:58.612551 2690 log.go:172] (0xc000675e00) (1) Data frame sent\nI0516 00:36:58.612562 2690 log.go:172] (0xc0009cd130) (0xc000675e00) Stream 
removed, broadcasting: 1\nI0516 00:36:58.612574 2690 log.go:172] (0xc0009cd130) Go away received\nI0516 00:36:58.612920 2690 log.go:172] (0xc0009cd130) (0xc000675e00) Stream removed, broadcasting: 1\nI0516 00:36:58.612945 2690 log.go:172] (0xc0009cd130) (0xc000664fa0) Stream removed, broadcasting: 3\nI0516 00:36:58.612954 2690 log.go:172] (0xc0009cd130) (0xc0005c03c0) Stream removed, broadcasting: 5\n"
May 16 00:36:58.616: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 16 00:36:58.616: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 16 00:36:58.616: INFO: Waiting for statefulset status.replicas updated to 0
May 16 00:36:58.619: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
May 16 00:37:08.628: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 16 00:37:08.628: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 16 00:37:08.628: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 16 00:37:08.660: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999535s
May 16 00:37:09.666: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975778783s
May 16 00:37:10.670: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.970011359s
May 16 00:37:11.675: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.965709177s
May 16 00:37:12.678: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.960714599s
May 16 00:37:13.684: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.957289968s
May 16 00:37:14.689: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.95128952s
May 16 00:37:15.695: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.946293059s
May 16 00:37:16.701: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.940597528s
May 16 00:37:17.706: INFO: Verifying statefulset ss doesn't scale past 3 for another 935.016951ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8402
May 16 00:37:18.712: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8402 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 16 00:37:18.944: INFO: stderr: "I0516 00:37:18.852901 2710 log.go:172] (0xc000aaa000) (0xc000151b80) Create stream\nI0516 00:37:18.852994 2710 log.go:172] (0xc000aaa000) (0xc000151b80) Stream added, broadcasting: 1\nI0516 00:37:18.855899 2710 log.go:172] (0xc000aaa000) Reply frame received for 1\nI0516 00:37:18.855944 2710 log.go:172] (0xc000aaa000) (0xc00052ce60) Create stream\nI0516 00:37:18.855963 2710 log.go:172] (0xc000aaa000) (0xc00052ce60) Stream added, broadcasting: 3\nI0516 00:37:18.859121 2710 log.go:172] (0xc000aaa000) Reply frame received for 3\nI0516 00:37:18.859152 2710 log.go:172] (0xc000aaa000) (0xc0005246e0) Create stream\nI0516 00:37:18.859176 2710 log.go:172] (0xc000aaa000) (0xc0005246e0) Stream added, broadcasting: 5\nI0516 00:37:18.862237 2710 log.go:172] (0xc000aaa000) Reply frame received for 5\nI0516 00:37:18.937044 2710 log.go:172] (0xc000aaa000) Data frame received for 3\nI0516 00:37:18.937081 2710 log.go:172] (0xc00052ce60) (3) Data frame handling\nI0516 00:37:18.937093 2710 log.go:172] 
(0xc00052ce60) (3) Data frame sent\nI0516 00:37:18.937101 2710 log.go:172] (0xc000aaa000) Data frame received for 3\nI0516 00:37:18.937267 2710 log.go:172] (0xc00052ce60) (3) Data frame handling\nI0516 00:37:18.937297 2710 log.go:172] (0xc000aaa000) Data frame received for 5\nI0516 00:37:18.937324 2710 log.go:172] (0xc0005246e0) (5) Data frame handling\nI0516 00:37:18.937351 2710 log.go:172] (0xc0005246e0) (5) Data frame sent\nI0516 00:37:18.937366 2710 log.go:172] (0xc000aaa000) Data frame received for 5\nI0516 00:37:18.937374 2710 log.go:172] (0xc0005246e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 00:37:18.938599 2710 log.go:172] (0xc000aaa000) Data frame received for 1\nI0516 00:37:18.938627 2710 log.go:172] (0xc000151b80) (1) Data frame handling\nI0516 00:37:18.938642 2710 log.go:172] (0xc000151b80) (1) Data frame sent\nI0516 00:37:18.938656 2710 log.go:172] (0xc000aaa000) (0xc000151b80) Stream removed, broadcasting: 1\nI0516 00:37:18.938669 2710 log.go:172] (0xc000aaa000) Go away received\nI0516 00:37:18.938993 2710 log.go:172] (0xc000aaa000) (0xc000151b80) Stream removed, broadcasting: 1\nI0516 00:37:18.939014 2710 log.go:172] (0xc000aaa000) (0xc00052ce60) Stream removed, broadcasting: 3\nI0516 00:37:18.939023 2710 log.go:172] (0xc000aaa000) (0xc0005246e0) Stream removed, broadcasting: 5\n" May 16 00:37:18.944: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 00:37:18.944: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 00:37:18.945: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8402 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 00:37:19.161: INFO: stderr: "I0516 00:37:19.073988 2730 log.go:172] (0xc000425b80) (0xc000528e60) Create stream\nI0516 00:37:19.074038 2730 log.go:172] (0xc000425b80) (0xc000528e60) Stream added, broadcasting: 1\nI0516 00:37:19.076141 2730 log.go:172] (0xc000425b80) Reply frame received for 1\nI0516 00:37:19.076173 2730 log.go:172] (0xc000425b80) (0xc00023a3c0) Create stream\nI0516 00:37:19.076181 2730 log.go:172] (0xc000425b80) (0xc00023a3c0) Stream added, broadcasting: 3\nI0516 00:37:19.077102 2730 log.go:172] (0xc000425b80) Reply frame received for 3\nI0516 00:37:19.077280 2730 log.go:172] (0xc000425b80) (0xc0006ea6e0) Create stream\nI0516 00:37:19.077292 2730 log.go:172] (0xc000425b80) (0xc0006ea6e0) Stream added, broadcasting: 5\nI0516 00:37:19.078101 2730 log.go:172] (0xc000425b80) Reply frame received for 5\nI0516 00:37:19.155172 2730 log.go:172] (0xc000425b80) Data frame received for 3\nI0516 00:37:19.155216 2730 log.go:172] (0xc00023a3c0) (3) Data frame handling\nI0516 00:37:19.155240 2730 log.go:172] (0xc00023a3c0) (3) Data frame sent\nI0516 00:37:19.155256 2730 log.go:172] (0xc000425b80) Data frame received for 3\nI0516 00:37:19.155271 2730 log.go:172] (0xc00023a3c0) (3) Data frame handling\nI0516 00:37:19.155310 2730 log.go:172] (0xc000425b80) Data frame received for 5\nI0516 00:37:19.155349 2730 log.go:172] (0xc0006ea6e0) (5) Data frame handling\nI0516 00:37:19.155383 2730 log.go:172] (0xc0006ea6e0) (5) Data frame sent\nI0516 00:37:19.155437 2730 log.go:172] (0xc000425b80) Data frame received for 5\nI0516 00:37:19.155462 2730 log.go:172] (0xc0006ea6e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 
00:37:19.156879 2730 log.go:172] (0xc000425b80) Data frame received for 1\nI0516 00:37:19.156920 2730 log.go:172] (0xc000528e60) (1) Data frame handling\nI0516 00:37:19.156939 2730 log.go:172] (0xc000528e60) (1) Data frame sent\nI0516 00:37:19.156965 2730 log.go:172] (0xc000425b80) (0xc000528e60) Stream removed, broadcasting: 1\nI0516 00:37:19.156993 2730 log.go:172] (0xc000425b80) Go away received\nI0516 00:37:19.157605 2730 log.go:172] (0xc000425b80) (0xc000528e60) Stream removed, broadcasting: 1\nI0516 00:37:19.157645 2730 log.go:172] (0xc000425b80) (0xc00023a3c0) Stream removed, broadcasting: 3\nI0516 00:37:19.157661 2730 log.go:172] (0xc000425b80) (0xc0006ea6e0) Stream removed, broadcasting: 5\n" May 16 00:37:19.161: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 00:37:19.161: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 00:37:19.161: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8402 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 00:37:19.386: INFO: stderr: "I0516 00:37:19.296727 2752 log.go:172] (0xc000a95340) (0xc0006def00) Create stream\nI0516 00:37:19.296806 2752 log.go:172] (0xc000a95340) (0xc0006def00) Stream added, broadcasting: 1\nI0516 00:37:19.300741 2752 log.go:172] (0xc000a95340) Reply frame received for 1\nI0516 00:37:19.300782 2752 log.go:172] (0xc000a95340) (0xc0006b7c20) Create stream\nI0516 00:37:19.300792 2752 log.go:172] (0xc000a95340) (0xc0006b7c20) Stream added, broadcasting: 3\nI0516 00:37:19.301979 2752 log.go:172] (0xc000a95340) Reply frame received for 3\nI0516 00:37:19.302014 2752 log.go:172] (0xc000a95340) (0xc000450500) Create stream\nI0516 00:37:19.302025 2752 log.go:172] (0xc000a95340) (0xc000450500) Stream added, broadcasting: 5\nI0516 00:37:19.302847 2752 log.go:172] (0xc000a95340) Reply frame received for 5\nI0516 00:37:19.379933 2752 log.go:172] (0xc000a95340) Data frame received for 5\nI0516 00:37:19.379972 2752 log.go:172] (0xc000a95340) Data frame received for 3\nI0516 00:37:19.379999 2752 log.go:172] (0xc0006b7c20) (3) Data frame handling\nI0516 00:37:19.380014 2752 log.go:172] (0xc0006b7c20) (3) Data frame sent\nI0516 00:37:19.380023 2752 log.go:172] (0xc000a95340) Data frame received for 3\nI0516 00:37:19.380029 2752 log.go:172] (0xc0006b7c20) (3) Data frame handling\nI0516 00:37:19.380053 2752 log.go:172] (0xc000450500) (5) Data frame handling\nI0516 00:37:19.380064 2752 log.go:172] (0xc000450500) (5) Data frame sent\nI0516 00:37:19.380074 2752 log.go:172] (0xc000a95340) Data frame received for 5\nI0516 00:37:19.380085 2752 log.go:172] (0xc000450500) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 00:37:19.381568 2752 log.go:172] (0xc000a95340) Data frame received for 1\nI0516 00:37:19.381589 2752 log.go:172] (0xc0006def00) (1) Data frame handling\nI0516 00:37:19.381602 2752 log.go:172] (0xc0006def00) (1) Data frame sent\nI0516 00:37:19.381620 2752 log.go:172] (0xc000a95340) (0xc0006def00) Stream removed, broadcasting: 1\nI0516 00:37:19.381635 2752 log.go:172] (0xc000a95340) Go away received\nI0516 00:37:19.381962 2752 log.go:172] (0xc000a95340) (0xc0006def00) Stream removed, broadcasting: 1\nI0516 00:37:19.381990 2752 log.go:172] (0xc000a95340) (0xc0006b7c20) Stream removed, broadcasting: 3\nI0516 00:37:19.381999 2752 log.go:172] 
(0xc000a95340) (0xc000450500) Stream removed, broadcasting: 5\n" May 16 00:37:19.386: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 00:37:19.386: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 00:37:19.387: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 16 00:37:49.408: INFO: Deleting all statefulset in ns statefulset-8402 May 16 00:37:49.410: INFO: Scaling statefulset ss to 0 May 16 00:37:49.419: INFO: Waiting for statefulset status.replicas updated to 0 May 16 00:37:49.421: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:37:49.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8402" for this suite. • [SLOW TEST:92.394 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":176,"skipped":2847,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:37:49.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 00:37:50.332: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 00:37:52.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186270, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186270, loc:(*time.Location)(0x7c342a0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186270, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186270, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 00:37:55.487: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:38:07.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4971" for this suite. STEP: Destroying namespace "webhook-4971-markers" for this suite. 
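
The three outcomes above hang on two registration fields, timeoutSeconds and failurePolicy. A sketch of the shape the suite registers via the AdmissionRegistration API (webhook name, rule, and handler path are illustrative; the paired service name and namespace are taken from the run):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-config            # illustrative
webhooks:
- name: slow.webhook.example.com       # illustrative
  timeoutSeconds: 1                    # shorter than the handler's 5s delay: the API request fails,
  failurePolicy: Fail                  # unless this is Ignore, in which case the object is admitted
  sideEffects: None
  admissionReviewVersions: ["v1"]
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]          # illustrative target
  clientConfig:
    # caBundle elided
    service:
      namespace: webhook-4971
      name: e2e-test-webhook
      path: /always-allow-delay-5s     # assumed handler path on the webhook pod

Omitting timeoutSeconds defaults it to 10s in admissionregistration v1, which is why the final registration above tolerates the 5s-slow handler.
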
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.570 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":177,"skipped":2861,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:38:08.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:38:08.089: INFO: Waiting up to 5m0s for pod "busybox-user-65534-75100246-d06d-4ff3-8a33-721c50139753" in namespace "security-context-test-6758" to be "Succeeded or Failed" May 16 00:38:08.149: INFO: Pod "busybox-user-65534-75100246-d06d-4ff3-8a33-721c50139753": Phase="Pending", Reason="", readiness=false. Elapsed: 60.293982ms May 16 00:38:10.153: INFO: Pod "busybox-user-65534-75100246-d06d-4ff3-8a33-721c50139753": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06420394s May 16 00:38:12.161: INFO: Pod "busybox-user-65534-75100246-d06d-4ff3-8a33-721c50139753": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072294064s May 16 00:38:12.161: INFO: Pod "busybox-user-65534-75100246-d06d-4ff3-8a33-721c50139753" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:38:12.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6758" for this suite. 
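
The pod behind this spec is little more than a securityContext override; a minimal sketch (image tag and command are assumptions, chosen to make the effect observable):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534             # name pattern from the run
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "id -u"]     # prints 65534 when the kubelet honors runAsUser
    securityContext:
      runAsUser: 65534                 # the conventional "nobody" uid the spec asserts
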
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":178,"skipped":2873,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:38:12.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-ac0427a3-45b1-4402-8bdb-410e31aed3bd STEP: Creating a pod to test consume configMaps May 16 00:38:12.288: INFO: Waiting up to 5m0s for pod "pod-configmaps-6478878d-e383-46a6-bddd-e497dd274996" in namespace "configmap-6594" to be "Succeeded or Failed" May 16 00:38:12.315: INFO: Pod "pod-configmaps-6478878d-e383-46a6-bddd-e497dd274996": Phase="Pending", Reason="", readiness=false. Elapsed: 26.000471ms May 16 00:38:14.318: INFO: Pod "pod-configmaps-6478878d-e383-46a6-bddd-e497dd274996": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029454318s May 16 00:38:16.322: INFO: Pod "pod-configmaps-6478878d-e383-46a6-bddd-e497dd274996": Phase="Running", Reason="", readiness=true. Elapsed: 4.033449529s May 16 00:38:18.327: INFO: Pod "pod-configmaps-6478878d-e383-46a6-bddd-e497dd274996": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038121535s STEP: Saw pod success May 16 00:38:18.327: INFO: Pod "pod-configmaps-6478878d-e383-46a6-bddd-e497dd274996" satisfied condition "Succeeded or Failed" May 16 00:38:18.330: INFO: Trying to get logs from node latest-worker pod pod-configmaps-6478878d-e383-46a6-bddd-e497dd274996 container configmap-volume-test: STEP: delete the pod May 16 00:38:18.376: INFO: Waiting for pod pod-configmaps-6478878d-e383-46a6-bddd-e497dd274996 to disappear May 16 00:38:18.391: INFO: Pod pod-configmaps-6478878d-e383-46a6-bddd-e497dd274996 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:38:18.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6594" for this suite. 
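
defaultMode applies to every file the ConfigMap projects into the volume; the wiring reduces to this sketch (ConfigMap name, key, and the exact mode are assumptions; the spec checks for a read-only mode on the projected file):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-defaultmode     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/configmap-volume/data-1"]  # would show -r--------
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume      # illustrative; the run used a generated name
      defaultMode: 0400                # octal; every projected file becomes r--------

The ConfigMap itself must exist first, e.g. via kubectl create configmap configmap-test-volume --from-literal=data-1=value-1 (key name assumed).
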
• [SLOW TEST:6.231 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":179,"skipped":2875,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:38:18.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0516 00:38:31.589410 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 16 00:38:31.589: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:38:31.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8838" for this suite. 
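
Why the remaining pods survive: the garbage collector removes a dependent only once it has no remaining live owners, and half of the pods were given a second owner reference above. Their metadata looks roughly like this (pod name suffix and uids elided; real ownerReferences require uids):

apiVersion: v1
kind: Pod
metadata:
  name: simpletest-rc-to-be-deleted-xxxxx    # generated suffix elided
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted        # this owner is deleted while waiting for dependents
    uid: "<elided>"
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay              # this owner remains live, so the GC must keep the pod
    uid: "<elided>"
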
• [SLOW TEST:13.456 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":180,"skipped":2895,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 00:38:31.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating all guestbook components
May 16 00:38:32.233: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
May 16 00:38:32.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2811'
May 16 00:38:33.007: INFO: stderr: ""
May 16 00:38:33.008: INFO: stdout: "service/agnhost-slave created\n"
May 16 00:38:33.008: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
May 16 00:38:33.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2811'
May 16 00:38:33.415: INFO: stderr: ""
May 16 00:38:33.415: INFO: stdout: "service/agnhost-master created\n"
May 16 00:38:33.415: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 16 00:38:33.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2811'
May 16 00:38:33.711: INFO: stderr: ""
May 16 00:38:33.711: INFO: stdout: "service/frontend created\n"
May 16 00:38:33.711: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
May 16 00:38:33.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2811'
May 16 00:38:33.950: INFO: stderr: ""
May 16 00:38:33.950: INFO: stdout: "deployment.apps/frontend created\n"
May 16 00:38:33.950: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 16 00:38:33.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2811'
May 16 00:38:34.227: INFO: stderr: ""
May 16 00:38:34.227: INFO: stdout: "deployment.apps/agnhost-master created\n"
May 16 00:38:34.227: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 16 00:38:34.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2811'
May 16 00:38:34.516: INFO: stderr: ""
May 16 00:38:34.516: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
May 16 00:38:34.516: INFO: Waiting for all frontend pods to be Running.
May 16 00:38:44.566: INFO: Waiting for frontend to serve content.
May 16 00:38:44.581: INFO: Trying to add a new entry to the guestbook.
May 16 00:38:44.593: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 16 00:38:44.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2811'
May 16 00:38:45.059: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 16 00:38:45.059: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 16 00:38:45.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2811' May 16 00:38:45.360: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 00:38:45.360: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 16 00:38:45.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2811' May 16 00:38:45.639: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 00:38:45.639: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 16 00:38:45.639: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2811' May 16 00:38:45.783: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 00:38:45.783: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 16 00:38:45.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2811' May 16 00:38:46.216: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 00:38:46.216: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 16 00:38:46.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2811' May 16 00:38:46.389: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 00:38:46.389: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:38:46.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2811" for this suite. 
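
The "serve content" and "add a new entry" checks reduce to HTTP calls against the frontend service name. A throwaway probe pod would do the same from inside the cluster (pod name and image are hypothetical, and the URL shape follows the agnhost guestbook convention, an assumption not shown in this run):

apiVersion: v1
kind: Pod
metadata:
  name: guestbook-probe                # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: probe
    image: busybox:1.29
    # cmd=set stores an entry through the frontend; a second call with cmd=get reads it back
    command: ["wget", "-qO-", "http://frontend/guestbook?cmd=set&key=messages&value=TestEntry"]
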
• [SLOW TEST:14.550 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":181,"skipped":2916,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:38:46.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 16 00:38:57.109: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 00:38:57.163: INFO: Pod pod-with-prestop-exec-hook still exists May 16 00:38:59.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 00:38:59.169: INFO: Pod pod-with-prestop-exec-hook still exists May 16 00:39:01.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 00:39:01.168: INFO: Pod pod-with-prestop-exec-hook still exists May 16 00:39:03.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 00:39:03.168: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:39:03.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9255" for this suite. 
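
The shape under test: a main container plus a preStop exec hook that calls out to the handler pod created in BeforeEach, whose access log is what "check prestop hook" inspects. A sketch (handler address and port are placeholders for values the suite discovers at runtime):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    args: ["pause"]
    lifecycle:
      preStop:
        exec:
          # runs in the container when deletion starts, before SIGTERM is delivered;
          # the handler pod records the request
          command: ["sh", "-c", "curl http://HANDLER_POD_IP:8080/echo?msg=prestop"]

The several-second gap above between issuing the delete and the pod disappearing is this hook draining before termination completes.
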
• [SLOW TEST:16.800 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":182,"skipped":2918,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:39:03.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:39:14.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7106" for this suite. • [SLOW TEST:11.116 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":288,"completed":183,"skipped":2934,"failed":0} SSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:39:14.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-9737 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9737 to expose endpoints map[] May 16 00:39:14.478: INFO: Get endpoints failed (24.369928ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 16 00:39:15.482: INFO: successfully validated that service multi-endpoint-test in namespace services-9737 exposes endpoints map[] (1.02828862s elapsed) STEP: Creating pod pod1 in namespace services-9737 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9737 to expose endpoints map[pod1:[100]] May 16 00:39:19.563: INFO: successfully validated that service multi-endpoint-test in namespace services-9737 exposes endpoints map[pod1:[100]] (4.074426672s elapsed) STEP: Creating pod pod2 in namespace services-9737 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9737 to expose endpoints map[pod1:[100] pod2:[101]] May 16 00:39:23.689: INFO: successfully validated that service multi-endpoint-test in namespace services-9737 exposes endpoints map[pod1:[100] pod2:[101]] (4.122461044s elapsed) STEP: Deleting pod pod1 in namespace services-9737 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9737 to expose endpoints map[pod2:[101]] May 16 00:39:24.757: INFO: successfully validated that service multi-endpoint-test in namespace services-9737 exposes endpoints map[pod2:[101]] (1.043072434s elapsed) STEP: Deleting pod pod2 in namespace services-9737 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9737 to expose endpoints map[] May 16 00:39:25.929: INFO: successfully validated that service multi-endpoint-test in namespace services-9737 exposes endpoints map[] (1.164647625s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:39:26.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9737" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:11.872 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":184,"skipped":2938,"failed":0} SSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:39:26.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 16 00:39:40.458: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:39:40.458: INFO: >>> kubeConfig: /root/.kube/config I0516 00:39:40.497282 7 log.go:172] (0xc00407c210) (0xc001244fa0) Create stream I0516 00:39:40.497314 7 log.go:172] (0xc00407c210) (0xc001244fa0) Stream added, broadcasting: 1 I0516 00:39:40.498721 7 log.go:172] (0xc00407c210) Reply frame received for 1 I0516 00:39:40.498758 7 log.go:172] (0xc00407c210) (0xc001245180) Create stream I0516 00:39:40.498765 7 log.go:172] (0xc00407c210) (0xc001245180) Stream added, broadcasting: 3 I0516 00:39:40.499500 7 log.go:172] (0xc00407c210) Reply frame received for 3 I0516 00:39:40.499531 7 log.go:172] (0xc00407c210) (0xc0014d5680) Create stream I0516 00:39:40.499539 7 log.go:172] (0xc00407c210) (0xc0014d5680) Stream added, broadcasting: 5 I0516 00:39:40.500250 7 log.go:172] (0xc00407c210) Reply frame received for 5 I0516 00:39:40.591619 7 log.go:172] (0xc00407c210) Data frame received for 5 I0516 00:39:40.591665 7 log.go:172] (0xc0014d5680) (5) Data frame handling I0516 00:39:40.591695 7 log.go:172] (0xc00407c210) Data frame received for 3 I0516 00:39:40.591709 7 log.go:172] (0xc001245180) (3) Data frame handling I0516 00:39:40.591733 7 log.go:172] (0xc001245180) (3) Data frame sent I0516 00:39:40.591754 7 log.go:172] (0xc00407c210) Data frame received for 3 I0516 00:39:40.591775 7 log.go:172] (0xc001245180) (3) Data frame handling I0516 00:39:40.593578 7 log.go:172] (0xc00407c210) Data frame received for 1 I0516 00:39:40.593618 7 log.go:172] (0xc001244fa0) (1) Data frame handling I0516 00:39:40.593643 7 log.go:172] (0xc001244fa0) (1) Data frame sent I0516 00:39:40.593662 7 log.go:172] (0xc00407c210) (0xc001244fa0) Stream removed, broadcasting: 1 I0516 
00:39:40.593682 7 log.go:172] (0xc00407c210) Go away received I0516 00:39:40.593856 7 log.go:172] (0xc00407c210) (0xc001244fa0) Stream removed, broadcasting: 1 I0516 00:39:40.593890 7 log.go:172] (0xc00407c210) (0xc001245180) Stream removed, broadcasting: 3 I0516 00:39:40.593914 7 log.go:172] (0xc00407c210) (0xc0014d5680) Stream removed, broadcasting: 5 May 16 00:39:40.593: INFO: Exec stderr: "" May 16 00:39:40.593: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:39:40.594: INFO: >>> kubeConfig: /root/.kube/config I0516 00:39:40.623411 7 log.go:172] (0xc001f844d0) (0xc002520280) Create stream I0516 00:39:40.623441 7 log.go:172] (0xc001f844d0) (0xc002520280) Stream added, broadcasting: 1 I0516 00:39:40.625057 7 log.go:172] (0xc001f844d0) Reply frame received for 1 I0516 00:39:40.625082 7 log.go:172] (0xc001f844d0) (0xc0025203c0) Create stream I0516 00:39:40.625091 7 log.go:172] (0xc001f844d0) (0xc0025203c0) Stream added, broadcasting: 3 I0516 00:39:40.626269 7 log.go:172] (0xc001f844d0) Reply frame received for 3 I0516 00:39:40.626310 7 log.go:172] (0xc001f844d0) (0xc0014d5860) Create stream I0516 00:39:40.626348 7 log.go:172] (0xc001f844d0) (0xc0014d5860) Stream added, broadcasting: 5 I0516 00:39:40.627205 7 log.go:172] (0xc001f844d0) Reply frame received for 5 I0516 00:39:40.707009 7 log.go:172] (0xc001f844d0) Data frame received for 3 I0516 00:39:40.707040 7 log.go:172] (0xc0025203c0) (3) Data frame handling I0516 00:39:40.707065 7 log.go:172] (0xc0025203c0) (3) Data frame sent I0516 00:39:40.707078 7 log.go:172] (0xc001f844d0) Data frame received for 3 I0516 00:39:40.707088 7 log.go:172] (0xc0025203c0) (3) Data frame handling I0516 00:39:40.707291 7 log.go:172] (0xc001f844d0) Data frame received for 5 I0516 00:39:40.707319 7 log.go:172] (0xc0014d5860) (5) Data frame handling I0516 00:39:40.708945 7 log.go:172] (0xc001f844d0) Data frame received for 1 I0516 00:39:40.708968 7 log.go:172] (0xc002520280) (1) Data frame handling I0516 00:39:40.708991 7 log.go:172] (0xc002520280) (1) Data frame sent I0516 00:39:40.709013 7 log.go:172] (0xc001f844d0) (0xc002520280) Stream removed, broadcasting: 1 I0516 00:39:40.709036 7 log.go:172] (0xc001f844d0) Go away received I0516 00:39:40.709238 7 log.go:172] (0xc001f844d0) (0xc002520280) Stream removed, broadcasting: 1 I0516 00:39:40.709422 7 log.go:172] (0xc001f844d0) (0xc0025203c0) Stream removed, broadcasting: 3 I0516 00:39:40.709436 7 log.go:172] (0xc001f844d0) (0xc0014d5860) Stream removed, broadcasting: 5 May 16 00:39:40.709: INFO: Exec stderr: "" May 16 00:39:40.709: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:39:40.709: INFO: >>> kubeConfig: /root/.kube/config I0516 00:39:40.738045 7 log.go:172] (0xc001f84c60) (0xc002520820) Create stream I0516 00:39:40.738078 7 log.go:172] (0xc001f84c60) (0xc002520820) Stream added, broadcasting: 1 I0516 00:39:40.739784 7 log.go:172] (0xc001f84c60) Reply frame received for 1 I0516 00:39:40.739812 7 log.go:172] (0xc001f84c60) (0xc0024bae60) Create stream I0516 00:39:40.739822 7 log.go:172] (0xc001f84c60) (0xc0024bae60) Stream added, broadcasting: 3 I0516 00:39:40.740703 7 log.go:172] (0xc001f84c60) Reply frame received for 3 I0516 00:39:40.740729 7 log.go:172] (0xc001f84c60) 
(0xc0024bb220) Create stream I0516 00:39:40.740740 7 log.go:172] (0xc001f84c60) (0xc0024bb220) Stream added, broadcasting: 5 I0516 00:39:40.741672 7 log.go:172] (0xc001f84c60) Reply frame received for 5 I0516 00:39:40.804829 7 log.go:172] (0xc001f84c60) Data frame received for 5 I0516 00:39:40.804889 7 log.go:172] (0xc0024bb220) (5) Data frame handling I0516 00:39:40.804934 7 log.go:172] (0xc001f84c60) Data frame received for 3 I0516 00:39:40.804957 7 log.go:172] (0xc0024bae60) (3) Data frame handling I0516 00:39:40.804998 7 log.go:172] (0xc0024bae60) (3) Data frame sent I0516 00:39:40.805015 7 log.go:172] (0xc001f84c60) Data frame received for 3 I0516 00:39:40.805027 7 log.go:172] (0xc0024bae60) (3) Data frame handling I0516 00:39:40.806319 7 log.go:172] (0xc001f84c60) Data frame received for 1 I0516 00:39:40.806346 7 log.go:172] (0xc002520820) (1) Data frame handling I0516 00:39:40.806369 7 log.go:172] (0xc002520820) (1) Data frame sent I0516 00:39:40.806431 7 log.go:172] (0xc001f84c60) (0xc002520820) Stream removed, broadcasting: 1 I0516 00:39:40.806493 7 log.go:172] (0xc001f84c60) Go away received I0516 00:39:40.806730 7 log.go:172] (0xc001f84c60) (0xc002520820) Stream removed, broadcasting: 1 I0516 00:39:40.806761 7 log.go:172] (0xc001f84c60) (0xc0024bae60) Stream removed, broadcasting: 3 I0516 00:39:40.806778 7 log.go:172] (0xc001f84c60) (0xc0024bb220) Stream removed, broadcasting: 5 May 16 00:39:40.806: INFO: Exec stderr: "" May 16 00:39:40.806: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:39:40.806: INFO: >>> kubeConfig: /root/.kube/config I0516 00:39:40.836501 7 log.go:172] (0xc00256e420) (0xc0024bb680) Create stream I0516 00:39:40.836531 7 log.go:172] (0xc00256e420) (0xc0024bb680) Stream added, broadcasting: 1 I0516 00:39:40.838794 7 log.go:172] (0xc00256e420) Reply frame received for 1 I0516 00:39:40.838844 7 log.go:172] (0xc00256e420) (0xc0024bb7c0) Create stream I0516 00:39:40.838864 7 log.go:172] (0xc00256e420) (0xc0024bb7c0) Stream added, broadcasting: 3 I0516 00:39:40.839656 7 log.go:172] (0xc00256e420) Reply frame received for 3 I0516 00:39:40.839687 7 log.go:172] (0xc00256e420) (0xc002520a00) Create stream I0516 00:39:40.839699 7 log.go:172] (0xc00256e420) (0xc002520a00) Stream added, broadcasting: 5 I0516 00:39:40.840432 7 log.go:172] (0xc00256e420) Reply frame received for 5 I0516 00:39:40.910272 7 log.go:172] (0xc00256e420) Data frame received for 5 I0516 00:39:40.910304 7 log.go:172] (0xc002520a00) (5) Data frame handling I0516 00:39:40.910328 7 log.go:172] (0xc00256e420) Data frame received for 3 I0516 00:39:40.910354 7 log.go:172] (0xc0024bb7c0) (3) Data frame handling I0516 00:39:40.910373 7 log.go:172] (0xc0024bb7c0) (3) Data frame sent I0516 00:39:40.910382 7 log.go:172] (0xc00256e420) Data frame received for 3 I0516 00:39:40.910388 7 log.go:172] (0xc0024bb7c0) (3) Data frame handling I0516 00:39:40.911868 7 log.go:172] (0xc00256e420) Data frame received for 1 I0516 00:39:40.911891 7 log.go:172] (0xc0024bb680) (1) Data frame handling I0516 00:39:40.911902 7 log.go:172] (0xc0024bb680) (1) Data frame sent I0516 00:39:40.911928 7 log.go:172] (0xc00256e420) (0xc0024bb680) Stream removed, broadcasting: 1 I0516 00:39:40.912109 7 log.go:172] (0xc00256e420) (0xc0024bb680) Stream removed, broadcasting: 1 I0516 00:39:40.912143 7 log.go:172] (0xc00256e420) Go away received I0516 00:39:40.912214 7 
log.go:172] (0xc00256e420) (0xc0024bb7c0) Stream removed, broadcasting: 3 I0516 00:39:40.912246 7 log.go:172] (0xc00256e420) (0xc002520a00) Stream removed, broadcasting: 5 May 16 00:39:40.912: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 16 00:39:40.912: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:39:40.912: INFO: >>> kubeConfig: /root/.kube/config I0516 00:39:40.945784 7 log.go:172] (0xc0028504d0) (0xc0014d5cc0) Create stream I0516 00:39:40.945812 7 log.go:172] (0xc0028504d0) (0xc0014d5cc0) Stream added, broadcasting: 1 I0516 00:39:40.947972 7 log.go:172] (0xc0028504d0) Reply frame received for 1 I0516 00:39:40.948027 7 log.go:172] (0xc0028504d0) (0xc0024bb900) Create stream I0516 00:39:40.948049 7 log.go:172] (0xc0028504d0) (0xc0024bb900) Stream added, broadcasting: 3 I0516 00:39:40.949438 7 log.go:172] (0xc0028504d0) Reply frame received for 3 I0516 00:39:40.949504 7 log.go:172] (0xc0028504d0) (0xc002585ea0) Create stream I0516 00:39:40.949538 7 log.go:172] (0xc0028504d0) (0xc002585ea0) Stream added, broadcasting: 5 I0516 00:39:40.950565 7 log.go:172] (0xc0028504d0) Reply frame received for 5 I0516 00:39:41.002360 7 log.go:172] (0xc0028504d0) Data frame received for 5 I0516 00:39:41.002394 7 log.go:172] (0xc002585ea0) (5) Data frame handling I0516 00:39:41.002430 7 log.go:172] (0xc0028504d0) Data frame received for 3 I0516 00:39:41.002443 7 log.go:172] (0xc0024bb900) (3) Data frame handling I0516 00:39:41.002458 7 log.go:172] (0xc0024bb900) (3) Data frame sent I0516 00:39:41.002471 7 log.go:172] (0xc0028504d0) Data frame received for 3 I0516 00:39:41.002483 7 log.go:172] (0xc0024bb900) (3) Data frame handling I0516 00:39:41.003736 7 log.go:172] (0xc0028504d0) Data frame received for 1 I0516 00:39:41.003785 7 log.go:172] (0xc0014d5cc0) (1) Data frame handling I0516 00:39:41.003810 7 log.go:172] (0xc0014d5cc0) (1) Data frame sent I0516 00:39:41.003829 7 log.go:172] (0xc0028504d0) (0xc0014d5cc0) Stream removed, broadcasting: 1 I0516 00:39:41.003903 7 log.go:172] (0xc0028504d0) Go away received I0516 00:39:41.003950 7 log.go:172] (0xc0028504d0) (0xc0014d5cc0) Stream removed, broadcasting: 1 I0516 00:39:41.004056 7 log.go:172] (0xc0028504d0) (0xc0024bb900) Stream removed, broadcasting: 3 I0516 00:39:41.004153 7 log.go:172] (0xc0028504d0) (0xc002585ea0) Stream removed, broadcasting: 5 May 16 00:39:41.004: INFO: Exec stderr: "" May 16 00:39:41.004: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:39:41.004: INFO: >>> kubeConfig: /root/.kube/config I0516 00:39:41.031997 7 log.go:172] (0xc00256ebb0) (0xc0024bba40) Create stream I0516 00:39:41.032043 7 log.go:172] (0xc00256ebb0) (0xc0024bba40) Stream added, broadcasting: 1 I0516 00:39:41.034397 7 log.go:172] (0xc00256ebb0) Reply frame received for 1 I0516 00:39:41.034430 7 log.go:172] (0xc00256ebb0) (0xc001868000) Create stream I0516 00:39:41.034440 7 log.go:172] (0xc00256ebb0) (0xc001868000) Stream added, broadcasting: 3 I0516 00:39:41.035458 7 log.go:172] (0xc00256ebb0) Reply frame received for 3 I0516 00:39:41.035500 7 log.go:172] (0xc00256ebb0) (0xc002585f40) Create stream I0516 00:39:41.035517 7 log.go:172] (0xc00256ebb0) 
(0xc002585f40) Stream added, broadcasting: 5 I0516 00:39:41.036381 7 log.go:172] (0xc00256ebb0) Reply frame received for 5 I0516 00:39:41.098351 7 log.go:172] (0xc00256ebb0) Data frame received for 5 I0516 00:39:41.098390 7 log.go:172] (0xc002585f40) (5) Data frame handling I0516 00:39:41.098413 7 log.go:172] (0xc00256ebb0) Data frame received for 3 I0516 00:39:41.098421 7 log.go:172] (0xc001868000) (3) Data frame handling I0516 00:39:41.098438 7 log.go:172] (0xc001868000) (3) Data frame sent I0516 00:39:41.098450 7 log.go:172] (0xc00256ebb0) Data frame received for 3 I0516 00:39:41.098459 7 log.go:172] (0xc001868000) (3) Data frame handling I0516 00:39:41.099620 7 log.go:172] (0xc00256ebb0) Data frame received for 1 I0516 00:39:41.099641 7 log.go:172] (0xc0024bba40) (1) Data frame handling I0516 00:39:41.099657 7 log.go:172] (0xc0024bba40) (1) Data frame sent I0516 00:39:41.099669 7 log.go:172] (0xc00256ebb0) (0xc0024bba40) Stream removed, broadcasting: 1 I0516 00:39:41.099686 7 log.go:172] (0xc00256ebb0) Go away received I0516 00:39:41.099823 7 log.go:172] (0xc00256ebb0) (0xc0024bba40) Stream removed, broadcasting: 1 I0516 00:39:41.099838 7 log.go:172] (0xc00256ebb0) (0xc001868000) Stream removed, broadcasting: 3 I0516 00:39:41.099845 7 log.go:172] (0xc00256ebb0) (0xc002585f40) Stream removed, broadcasting: 5 May 16 00:39:41.099: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 16 00:39:41.099: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:39:41.099: INFO: >>> kubeConfig: /root/.kube/config I0516 00:39:41.130021 7 log.go:172] (0xc001f853f0) (0xc002520dc0) Create stream I0516 00:39:41.130049 7 log.go:172] (0xc001f853f0) (0xc002520dc0) Stream added, broadcasting: 1 I0516 00:39:41.138216 7 log.go:172] (0xc001f853f0) Reply frame received for 1 I0516 00:39:41.138261 7 log.go:172] (0xc001f853f0) (0xc002520e60) Create stream I0516 00:39:41.138299 7 log.go:172] (0xc001f853f0) (0xc002520e60) Stream added, broadcasting: 3 I0516 00:39:41.139847 7 log.go:172] (0xc001f853f0) Reply frame received for 3 I0516 00:39:41.139878 7 log.go:172] (0xc001f853f0) (0xc002520f00) Create stream I0516 00:39:41.139896 7 log.go:172] (0xc001f853f0) (0xc002520f00) Stream added, broadcasting: 5 I0516 00:39:41.141806 7 log.go:172] (0xc001f853f0) Reply frame received for 5 I0516 00:39:41.189601 7 log.go:172] (0xc001f853f0) Data frame received for 3 I0516 00:39:41.189700 7 log.go:172] (0xc002520e60) (3) Data frame handling I0516 00:39:41.189765 7 log.go:172] (0xc002520e60) (3) Data frame sent I0516 00:39:41.189791 7 log.go:172] (0xc001f853f0) Data frame received for 3 I0516 00:39:41.189801 7 log.go:172] (0xc002520e60) (3) Data frame handling I0516 00:39:41.189927 7 log.go:172] (0xc001f853f0) Data frame received for 5 I0516 00:39:41.189954 7 log.go:172] (0xc002520f00) (5) Data frame handling I0516 00:39:41.191897 7 log.go:172] (0xc001f853f0) Data frame received for 1 I0516 00:39:41.191931 7 log.go:172] (0xc002520dc0) (1) Data frame handling I0516 00:39:41.191955 7 log.go:172] (0xc002520dc0) (1) Data frame sent I0516 00:39:41.191973 7 log.go:172] (0xc001f853f0) (0xc002520dc0) Stream removed, broadcasting: 1 I0516 00:39:41.192074 7 log.go:172] (0xc001f853f0) (0xc002520dc0) Stream removed, broadcasting: 1 I0516 00:39:41.192094 7 log.go:172] (0xc001f853f0) (0xc002520e60) Stream 
removed, broadcasting: 3 I0516 00:39:41.192115 7 log.go:172] (0xc001f853f0) (0xc002520f00) Stream removed, broadcasting: 5 May 16 00:39:41.192: INFO: Exec stderr: "" May 16 00:39:41.192: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:39:41.192: INFO: >>> kubeConfig: /root/.kube/config I0516 00:39:41.193371 7 log.go:172] (0xc001f853f0) Go away received I0516 00:39:41.226805 7 log.go:172] (0xc002850bb0) (0xc0018683c0) Create stream I0516 00:39:41.226860 7 log.go:172] (0xc002850bb0) (0xc0018683c0) Stream added, broadcasting: 1 I0516 00:39:41.229007 7 log.go:172] (0xc002850bb0) Reply frame received for 1 I0516 00:39:41.229055 7 log.go:172] (0xc002850bb0) (0xc001e96140) Create stream I0516 00:39:41.229069 7 log.go:172] (0xc002850bb0) (0xc001e96140) Stream added, broadcasting: 3 I0516 00:39:41.230210 7 log.go:172] (0xc002850bb0) Reply frame received for 3 I0516 00:39:41.230242 7 log.go:172] (0xc002850bb0) (0xc0024bbae0) Create stream I0516 00:39:41.230254 7 log.go:172] (0xc002850bb0) (0xc0024bbae0) Stream added, broadcasting: 5 I0516 00:39:41.231147 7 log.go:172] (0xc002850bb0) Reply frame received for 5 I0516 00:39:41.287991 7 log.go:172] (0xc002850bb0) Data frame received for 5 I0516 00:39:41.288034 7 log.go:172] (0xc0024bbae0) (5) Data frame handling I0516 00:39:41.288060 7 log.go:172] (0xc002850bb0) Data frame received for 3 I0516 00:39:41.288073 7 log.go:172] (0xc001e96140) (3) Data frame handling I0516 00:39:41.288085 7 log.go:172] (0xc001e96140) (3) Data frame sent I0516 00:39:41.288103 7 log.go:172] (0xc002850bb0) Data frame received for 3 I0516 00:39:41.288119 7 log.go:172] (0xc001e96140) (3) Data frame handling I0516 00:39:41.289656 7 log.go:172] (0xc002850bb0) Data frame received for 1 I0516 00:39:41.289706 7 log.go:172] (0xc0018683c0) (1) Data frame handling I0516 00:39:41.289733 7 log.go:172] (0xc0018683c0) (1) Data frame sent I0516 00:39:41.289754 7 log.go:172] (0xc002850bb0) (0xc0018683c0) Stream removed, broadcasting: 1 I0516 00:39:41.289775 7 log.go:172] (0xc002850bb0) Go away received I0516 00:39:41.289891 7 log.go:172] (0xc002850bb0) (0xc0018683c0) Stream removed, broadcasting: 1 I0516 00:39:41.289918 7 log.go:172] (0xc002850bb0) (0xc001e96140) Stream removed, broadcasting: 3 I0516 00:39:41.289930 7 log.go:172] (0xc002850bb0) (0xc0024bbae0) Stream removed, broadcasting: 5 May 16 00:39:41.289: INFO: Exec stderr: "" May 16 00:39:41.289: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:39:41.290: INFO: >>> kubeConfig: /root/.kube/config I0516 00:39:41.323647 7 log.go:172] (0xc00256f1e0) (0xc0024bbd60) Create stream I0516 00:39:41.323670 7 log.go:172] (0xc00256f1e0) (0xc0024bbd60) Stream added, broadcasting: 1 I0516 00:39:41.326025 7 log.go:172] (0xc00256f1e0) Reply frame received for 1 I0516 00:39:41.326059 7 log.go:172] (0xc00256f1e0) (0xc0024bbe00) Create stream I0516 00:39:41.326073 7 log.go:172] (0xc00256f1e0) (0xc0024bbe00) Stream added, broadcasting: 3 I0516 00:39:41.326999 7 log.go:172] (0xc00256f1e0) Reply frame received for 3 I0516 00:39:41.327044 7 log.go:172] (0xc00256f1e0) (0xc001e96280) Create stream I0516 00:39:41.327059 7 log.go:172] (0xc00256f1e0) (0xc001e96280) Stream added, broadcasting: 5 I0516 00:39:41.328009 7 
log.go:172] (0xc00256f1e0) Reply frame received for 5 I0516 00:39:41.389329 7 log.go:172] (0xc00256f1e0) Data frame received for 5 I0516 00:39:41.389405 7 log.go:172] (0xc001e96280) (5) Data frame handling I0516 00:39:41.389431 7 log.go:172] (0xc00256f1e0) Data frame received for 3 I0516 00:39:41.389447 7 log.go:172] (0xc0024bbe00) (3) Data frame handling I0516 00:39:41.389457 7 log.go:172] (0xc0024bbe00) (3) Data frame sent I0516 00:39:41.389464 7 log.go:172] (0xc00256f1e0) Data frame received for 3 I0516 00:39:41.389469 7 log.go:172] (0xc0024bbe00) (3) Data frame handling I0516 00:39:41.390659 7 log.go:172] (0xc00256f1e0) Data frame received for 1 I0516 00:39:41.390683 7 log.go:172] (0xc0024bbd60) (1) Data frame handling I0516 00:39:41.390694 7 log.go:172] (0xc0024bbd60) (1) Data frame sent I0516 00:39:41.390709 7 log.go:172] (0xc00256f1e0) (0xc0024bbd60) Stream removed, broadcasting: 1 I0516 00:39:41.390721 7 log.go:172] (0xc00256f1e0) Go away received I0516 00:39:41.390836 7 log.go:172] (0xc00256f1e0) (0xc0024bbd60) Stream removed, broadcasting: 1 I0516 00:39:41.390906 7 log.go:172] (0xc00256f1e0) (0xc0024bbe00) Stream removed, broadcasting: 3 I0516 00:39:41.390927 7 log.go:172] (0xc00256f1e0) (0xc001e96280) Stream removed, broadcasting: 5 May 16 00:39:41.390: INFO: Exec stderr: "" May 16 00:39:41.390: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:39:41.391: INFO: >>> kubeConfig: /root/.kube/config I0516 00:39:41.417981 7 log.go:172] (0xc001f85a20) (0xc0025212c0) Create stream I0516 00:39:41.418001 7 log.go:172] (0xc001f85a20) (0xc0025212c0) Stream added, broadcasting: 1 I0516 00:39:41.419991 7 log.go:172] (0xc001f85a20) Reply frame received for 1 I0516 00:39:41.420050 7 log.go:172] (0xc001f85a20) (0xc001868460) Create stream I0516 00:39:41.420077 7 log.go:172] (0xc001f85a20) (0xc001868460) Stream added, broadcasting: 3 I0516 00:39:41.421098 7 log.go:172] (0xc001f85a20) Reply frame received for 3 I0516 00:39:41.421332 7 log.go:172] (0xc001f85a20) (0xc001e96640) Create stream I0516 00:39:41.421351 7 log.go:172] (0xc001f85a20) (0xc001e96640) Stream added, broadcasting: 5 I0516 00:39:41.422305 7 log.go:172] (0xc001f85a20) Reply frame received for 5 I0516 00:39:41.493002 7 log.go:172] (0xc001f85a20) Data frame received for 3 I0516 00:39:41.493040 7 log.go:172] (0xc001868460) (3) Data frame handling I0516 00:39:41.493055 7 log.go:172] (0xc001868460) (3) Data frame sent I0516 00:39:41.493081 7 log.go:172] (0xc001f85a20) Data frame received for 3 I0516 00:39:41.493308 7 log.go:172] (0xc001f85a20) Data frame received for 5 I0516 00:39:41.493368 7 log.go:172] (0xc001e96640) (5) Data frame handling I0516 00:39:41.493411 7 log.go:172] (0xc001868460) (3) Data frame handling I0516 00:39:41.494815 7 log.go:172] (0xc001f85a20) Data frame received for 1 I0516 00:39:41.494829 7 log.go:172] (0xc0025212c0) (1) Data frame handling I0516 00:39:41.494846 7 log.go:172] (0xc0025212c0) (1) Data frame sent I0516 00:39:41.494861 7 log.go:172] (0xc001f85a20) (0xc0025212c0) Stream removed, broadcasting: 1 I0516 00:39:41.494933 7 log.go:172] (0xc001f85a20) (0xc0025212c0) Stream removed, broadcasting: 1 I0516 00:39:41.494943 7 log.go:172] (0xc001f85a20) (0xc001868460) Stream removed, broadcasting: 3 I0516 00:39:41.495063 7 log.go:172] (0xc001f85a20) Go away received I0516 00:39:41.495105 7 log.go:172] (0xc001f85a20) (0xc001e96640) 
Stream removed, broadcasting: 5 May 16 00:39:41.495: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:39:41.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-7576" for this suite. • [SLOW TEST:15.309 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":185,"skipped":2942,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:39:41.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 16 00:39:41.592: INFO: Waiting up to 5m0s for pod "pod-1ee1ea05-2324-4ba4-8745-01216a2292e0" in namespace "emptydir-6680" to be "Succeeded or Failed" May 16 00:39:41.611: INFO: Pod "pod-1ee1ea05-2324-4ba4-8745-01216a2292e0": Phase="Pending", Reason="", readiness=false. Elapsed: 18.86355ms May 16 00:39:43.616: INFO: Pod "pod-1ee1ea05-2324-4ba4-8745-01216a2292e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023258833s May 16 00:39:45.620: INFO: Pod "pod-1ee1ea05-2324-4ba4-8745-01216a2292e0": Phase="Running", Reason="", readiness=true. Elapsed: 4.027534661s May 16 00:39:47.623: INFO: Pod "pod-1ee1ea05-2324-4ba4-8745-01216a2292e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03084818s STEP: Saw pod success May 16 00:39:47.623: INFO: Pod "pod-1ee1ea05-2324-4ba4-8745-01216a2292e0" satisfied condition "Succeeded or Failed" May 16 00:39:47.626: INFO: Trying to get logs from node latest-worker2 pod pod-1ee1ea05-2324-4ba4-8745-01216a2292e0 container test-container: STEP: delete the pod May 16 00:39:47.663: INFO: Waiting for pod pod-1ee1ea05-2324-4ba4-8745-01216a2292e0 to disappear May 16 00:39:47.680: INFO: Pod pod-1ee1ea05-2324-4ba4-8745-01216a2292e0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:39:47.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6680" for this suite. 
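The (non-root,0666,default) variant above amounts to: run the container as a non-root user, write a file with mode 0666 into an emptyDir backed by the default (node disk) medium, and verify the result. A minimal sketch under those assumptions, with illustrative names and a busybox probe instead of the suite's test image:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  securityContext:
    runAsUser: 1001                  # non-root
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.31
    command: ["sh", "-c", "touch /cache/f && chmod 0666 /cache/f && ls -ln /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}                     # default medium; the kubelet makes it writable to the pod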
• [SLOW TEST:6.211 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":186,"skipped":2975,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:39:47.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3305 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3305 STEP: Creating statefulset with conflicting port in namespace statefulset-3305 STEP: Waiting until pod test-pod will start running in namespace statefulset-3305 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3305 May 16 00:39:53.931: INFO: Observed stateful pod in namespace: statefulset-3305, name: ss-0, uid: b99ce905-cd51-4fdd-ba04-4fd060f887c4, status phase: Pending. Waiting for statefulset controller to delete. May 16 00:39:54.279: INFO: Observed stateful pod in namespace: statefulset-3305, name: ss-0, uid: b99ce905-cd51-4fdd-ba04-4fd060f887c4, status phase: Failed. Waiting for statefulset controller to delete. May 16 00:39:54.311: INFO: Observed stateful pod in namespace: statefulset-3305, name: ss-0, uid: b99ce905-cd51-4fdd-ba04-4fd060f887c4, status phase: Failed. Waiting for statefulset controller to delete. 
May 16 00:39:54.323: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3305 STEP: Removing pod with conflicting port in namespace statefulset-3305 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3305 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 16 00:40:00.434: INFO: Deleting all statefulset in ns statefulset-3305 May 16 00:40:00.437: INFO: Scaling statefulset ss to 0 May 16 00:40:20.491: INFO: Waiting for statefulset status.replicas updated to 0 May 16 00:40:20.494: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:40:20.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3305" for this suite. • [SLOW TEST:32.806 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":187,"skipped":2991,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:40:20.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 16 00:40:20.565: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
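Registering a sample API server, as the step above begins to do, means creating an APIService object so the kube-apiserver proxies one group/version to an in-cluster Service. A sketch of such a registration; the group, version, Service coordinates, and the TLS shortcut are all illustrative (a real registration would pin a caBundle):

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # must be <version>.<group>
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:
    name: sample-api                  # Service fronting the aggregated server
    namespace: demo-ns
  insecureSkipTLSVerify: true         # sketch only; production should set caBundle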
May 16 00:40:21.183: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 16 00:40:23.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186421, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186421, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186421, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186421, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:40:26.018: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186421, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186421, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186421, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186421, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:40:29.500: INFO: Waited 1.529200912s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:40:29.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6910" for this suite. 
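Once the deployment conditions above flip and the suite reports the sample-apiserver ready to handle requests, the registration can be checked from any client; the group/version path below matches the illustrative sketch earlier, not necessarily the suite's actual group:

# Availability of every registered APIService, aggregated ones included
kubectl get apiservices
# Discovery document served through the kube-apiserver's aggregation proxy
kubectl get --raw /apis/wardle.example.com/v1alpha1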
• [SLOW TEST:9.545 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":188,"skipped":3020,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:40:30.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 16 00:40:30.647: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8188 /api/v1/namespaces/watch-8188/configmaps/e2e-watch-test-watch-closed 3980576c-01c1-4f26-bfa4-4038a67fbc8a 5018428 0 2020-05-16 00:40:30 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-16 00:40:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 16 00:40:30.647: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8188 /api/v1/namespaces/watch-8188/configmaps/e2e-watch-test-watch-closed 3980576c-01c1-4f26-bfa4-4038a67fbc8a 5018429 0 2020-05-16 00:40:30 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-16 00:40:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 16 00:40:30.936: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8188 /api/v1/namespaces/watch-8188/configmaps/e2e-watch-test-watch-closed 3980576c-01c1-4f26-bfa4-4038a67fbc8a 5018432 0 2020-05-16 00:40:30 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-16 00:40:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 16 
00:40:30.937: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8188 /api/v1/namespaces/watch-8188/configmaps/e2e-watch-test-watch-closed 3980576c-01c1-4f26-bfa4-4038a67fbc8a 5018434 0 2020-05-16 00:40:30 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-16 00:40:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:40:30.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8188" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":189,"skipped":3024,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:40:31.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-24844239-eae8-4356-95d5-cc9e8c0c81a3 STEP: Creating a pod to test consume secrets May 16 00:40:31.580: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ca88fa06-bf88-45dc-9470-20f1bcdba1ea" in namespace "projected-1730" to be "Succeeded or Failed" May 16 00:40:31.630: INFO: Pod "pod-projected-secrets-ca88fa06-bf88-45dc-9470-20f1bcdba1ea": Phase="Pending", Reason="", readiness=false. Elapsed: 49.29311ms May 16 00:40:33.677: INFO: Pod "pod-projected-secrets-ca88fa06-bf88-45dc-9470-20f1bcdba1ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096938515s May 16 00:40:35.682: INFO: Pod "pod-projected-secrets-ca88fa06-bf88-45dc-9470-20f1bcdba1ea": Phase="Running", Reason="", readiness=true. Elapsed: 4.101309095s May 16 00:40:37.686: INFO: Pod "pod-projected-secrets-ca88fa06-bf88-45dc-9470-20f1bcdba1ea": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.105085613s STEP: Saw pod success May 16 00:40:37.686: INFO: Pod "pod-projected-secrets-ca88fa06-bf88-45dc-9470-20f1bcdba1ea" satisfied condition "Succeeded or Failed" May 16 00:40:37.689: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-ca88fa06-bf88-45dc-9470-20f1bcdba1ea container secret-volume-test: STEP: delete the pod May 16 00:40:37.738: INFO: Waiting for pod pod-projected-secrets-ca88fa06-bf88-45dc-9470-20f1bcdba1ea to disappear May 16 00:40:37.766: INFO: Pod pod-projected-secrets-ca88fa06-bf88-45dc-9470-20f1bcdba1ea no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:40:37.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1730" for this suite. • [SLOW TEST:6.488 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":190,"skipped":3033,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:40:37.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 16 00:40:37.843: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:40:45.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7853" for this suite. 
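The RestartNever case above exercises the init-container contract: initContainers run sequentially to completion before any regular container starts, and with restartPolicy: Never a failing init container fails the pod outright instead of retrying. A minimal sketch of that ordering, with illustrative names and images:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox:1.31
    command: ["sh", "-c", "echo init-1 done"]
  - name: init-2                      # starts only after init-1 succeeds
    image: busybox:1.31
    command: ["sh", "-c", "echo init-2 done"]
  containers:
  - name: main                        # starts only after every init succeeds
    image: busybox:1.31
    command: ["sh", "-c", "echo main done"]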
• [SLOW TEST:8.083 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":191,"skipped":3054,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:40:45.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-2dnn STEP: Creating a pod to test atomic-volume-subpath May 16 00:40:45.969: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-2dnn" in namespace "subpath-3935" to be "Succeeded or Failed" May 16 00:40:46.103: INFO: Pod "pod-subpath-test-projected-2dnn": Phase="Pending", Reason="", readiness=false. Elapsed: 133.746296ms May 16 00:40:48.108: INFO: Pod "pod-subpath-test-projected-2dnn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138151269s May 16 00:40:50.112: INFO: Pod "pod-subpath-test-projected-2dnn": Phase="Running", Reason="", readiness=true. Elapsed: 4.142817758s May 16 00:40:52.116: INFO: Pod "pod-subpath-test-projected-2dnn": Phase="Running", Reason="", readiness=true. Elapsed: 6.146253893s May 16 00:40:54.120: INFO: Pod "pod-subpath-test-projected-2dnn": Phase="Running", Reason="", readiness=true. Elapsed: 8.150162598s May 16 00:40:56.124: INFO: Pod "pod-subpath-test-projected-2dnn": Phase="Running", Reason="", readiness=true. Elapsed: 10.154766538s May 16 00:40:58.133: INFO: Pod "pod-subpath-test-projected-2dnn": Phase="Running", Reason="", readiness=true. Elapsed: 12.163722432s May 16 00:41:00.138: INFO: Pod "pod-subpath-test-projected-2dnn": Phase="Running", Reason="", readiness=true. Elapsed: 14.168881499s May 16 00:41:02.143: INFO: Pod "pod-subpath-test-projected-2dnn": Phase="Running", Reason="", readiness=true. Elapsed: 16.173320275s May 16 00:41:04.146: INFO: Pod "pod-subpath-test-projected-2dnn": Phase="Running", Reason="", readiness=true. Elapsed: 18.177040461s May 16 00:41:06.150: INFO: Pod "pod-subpath-test-projected-2dnn": Phase="Running", Reason="", readiness=true. Elapsed: 20.180142007s May 16 00:41:08.153: INFO: Pod "pod-subpath-test-projected-2dnn": Phase="Running", Reason="", readiness=true. Elapsed: 22.184028475s May 16 00:41:10.158: INFO: Pod "pod-subpath-test-projected-2dnn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.188338309s STEP: Saw pod success May 16 00:41:10.158: INFO: Pod "pod-subpath-test-projected-2dnn" satisfied condition "Succeeded or Failed" May 16 00:41:10.160: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-2dnn container test-container-subpath-projected-2dnn: STEP: delete the pod May 16 00:41:10.286: INFO: Waiting for pod pod-subpath-test-projected-2dnn to disappear May 16 00:41:10.298: INFO: Pod pod-subpath-test-projected-2dnn no longer exists STEP: Deleting pod pod-subpath-test-projected-2dnn May 16 00:41:10.298: INFO: Deleting pod "pod-subpath-test-projected-2dnn" in namespace "subpath-3935" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:41:10.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3935" for this suite. • [SLOW TEST:24.451 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":192,"skipped":3118,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:41:10.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:41:14.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1057" for this suite. 
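The hostAliases test above passes because the kubelet merges pod.spec.hostAliases entries into the /etc/hosts file it manages for the pod. A minimal sketch, with illustrative IPs and hostnames:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: test
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/hosts"]   # should show the aliases appended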
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":193,"skipped":3123,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:41:14.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-t6g4 STEP: Creating a pod to test atomic-volume-subpath May 16 00:41:14.668: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-t6g4" in namespace "subpath-7036" to be "Succeeded or Failed" May 16 00:41:14.687: INFO: Pod "pod-subpath-test-secret-t6g4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.084432ms May 16 00:41:16.726: INFO: Pod "pod-subpath-test-secret-t6g4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057158407s May 16 00:41:18.730: INFO: Pod "pod-subpath-test-secret-t6g4": Phase="Running", Reason="", readiness=true. Elapsed: 4.061792931s May 16 00:41:20.734: INFO: Pod "pod-subpath-test-secret-t6g4": Phase="Running", Reason="", readiness=true. Elapsed: 6.065804344s May 16 00:41:22.739: INFO: Pod "pod-subpath-test-secret-t6g4": Phase="Running", Reason="", readiness=true. Elapsed: 8.070540201s May 16 00:41:24.742: INFO: Pod "pod-subpath-test-secret-t6g4": Phase="Running", Reason="", readiness=true. Elapsed: 10.073104249s May 16 00:41:26.746: INFO: Pod "pod-subpath-test-secret-t6g4": Phase="Running", Reason="", readiness=true. Elapsed: 12.077466404s May 16 00:41:28.750: INFO: Pod "pod-subpath-test-secret-t6g4": Phase="Running", Reason="", readiness=true. Elapsed: 14.081655499s May 16 00:41:30.821: INFO: Pod "pod-subpath-test-secret-t6g4": Phase="Running", Reason="", readiness=true. Elapsed: 16.152855916s May 16 00:41:32.826: INFO: Pod "pod-subpath-test-secret-t6g4": Phase="Running", Reason="", readiness=true. Elapsed: 18.157221026s May 16 00:41:34.830: INFO: Pod "pod-subpath-test-secret-t6g4": Phase="Running", Reason="", readiness=true. Elapsed: 20.161416932s May 16 00:41:36.833: INFO: Pod "pod-subpath-test-secret-t6g4": Phase="Running", Reason="", readiness=true. Elapsed: 22.164715483s May 16 00:41:38.838: INFO: Pod "pod-subpath-test-secret-t6g4": Phase="Running", Reason="", readiness=true. Elapsed: 24.16946144s May 16 00:41:40.842: INFO: Pod "pod-subpath-test-secret-t6g4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.173112027s STEP: Saw pod success May 16 00:41:40.842: INFO: Pod "pod-subpath-test-secret-t6g4" satisfied condition "Succeeded or Failed" May 16 00:41:40.844: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-t6g4 container test-container-subpath-secret-t6g4: STEP: delete the pod May 16 00:41:40.903: INFO: Waiting for pod pod-subpath-test-secret-t6g4 to disappear May 16 00:41:40.941: INFO: Pod pod-subpath-test-secret-t6g4 no longer exists STEP: Deleting pod pod-subpath-test-secret-t6g4 May 16 00:41:40.941: INFO: Deleting pod "pod-subpath-test-secret-t6g4" in namespace "subpath-7036" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:41:40.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7036" for this suite. • [SLOW TEST:26.511 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":194,"skipped":3136,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:41:40.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-1eb4e398-e5dc-468b-875a-105239dd7509 STEP: Creating a pod to test consume configMaps May 16 00:41:41.019: INFO: Waiting up to 5m0s for pod "pod-configmaps-c8da0b62-9b5b-4a45-bad1-5fc045d36313" in namespace "configmap-7313" to be "Succeeded or Failed" May 16 00:41:41.067: INFO: Pod "pod-configmaps-c8da0b62-9b5b-4a45-bad1-5fc045d36313": Phase="Pending", Reason="", readiness=false. Elapsed: 48.369217ms May 16 00:41:43.182: INFO: Pod "pod-configmaps-c8da0b62-9b5b-4a45-bad1-5fc045d36313": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162978884s May 16 00:41:45.412: INFO: Pod "pod-configmaps-c8da0b62-9b5b-4a45-bad1-5fc045d36313": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.393338909s STEP: Saw pod success May 16 00:41:45.412: INFO: Pod "pod-configmaps-c8da0b62-9b5b-4a45-bad1-5fc045d36313" satisfied condition "Succeeded or Failed" May 16 00:41:45.416: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-c8da0b62-9b5b-4a45-bad1-5fc045d36313 container configmap-volume-test: STEP: delete the pod May 16 00:41:45.447: INFO: Waiting for pod pod-configmaps-c8da0b62-9b5b-4a45-bad1-5fc045d36313 to disappear May 16 00:41:45.465: INFO: Pod pod-configmaps-c8da0b62-9b5b-4a45-bad1-5fc045d36313 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:41:45.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7313" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":195,"skipped":3150,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:41:45.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-m8tb STEP: Creating a pod to test atomic-volume-subpath May 16 00:41:45.567: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-m8tb" in namespace "subpath-8480" to be "Succeeded or Failed" May 16 00:41:45.570: INFO: Pod "pod-subpath-test-configmap-m8tb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.661744ms May 16 00:41:47.573: INFO: Pod "pod-subpath-test-configmap-m8tb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005428845s May 16 00:41:49.577: INFO: Pod "pod-subpath-test-configmap-m8tb": Phase="Running", Reason="", readiness=true. Elapsed: 4.009439806s May 16 00:41:51.581: INFO: Pod "pod-subpath-test-configmap-m8tb": Phase="Running", Reason="", readiness=true. Elapsed: 6.013308059s May 16 00:41:53.585: INFO: Pod "pod-subpath-test-configmap-m8tb": Phase="Running", Reason="", readiness=true. Elapsed: 8.01707638s May 16 00:41:55.588: INFO: Pod "pod-subpath-test-configmap-m8tb": Phase="Running", Reason="", readiness=true. Elapsed: 10.02053308s May 16 00:41:57.601: INFO: Pod "pod-subpath-test-configmap-m8tb": Phase="Running", Reason="", readiness=true. Elapsed: 12.033087741s May 16 00:41:59.630: INFO: Pod "pod-subpath-test-configmap-m8tb": Phase="Running", Reason="", readiness=true. Elapsed: 14.062965328s May 16 00:42:01.654: INFO: Pod "pod-subpath-test-configmap-m8tb": Phase="Running", Reason="", readiness=true. Elapsed: 16.086970035s May 16 00:42:03.658: INFO: Pod "pod-subpath-test-configmap-m8tb": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.090896536s May 16 00:42:05.663: INFO: Pod "pod-subpath-test-configmap-m8tb": Phase="Running", Reason="", readiness=true. Elapsed: 20.09593752s May 16 00:42:07.678: INFO: Pod "pod-subpath-test-configmap-m8tb": Phase="Running", Reason="", readiness=true. Elapsed: 22.110955509s May 16 00:42:09.682: INFO: Pod "pod-subpath-test-configmap-m8tb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.114987406s STEP: Saw pod success May 16 00:42:09.682: INFO: Pod "pod-subpath-test-configmap-m8tb" satisfied condition "Succeeded or Failed" May 16 00:42:09.685: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-m8tb container test-container-subpath-configmap-m8tb: STEP: delete the pod May 16 00:42:09.784: INFO: Waiting for pod pod-subpath-test-configmap-m8tb to disappear May 16 00:42:09.815: INFO: Pod pod-subpath-test-configmap-m8tb no longer exists STEP: Deleting pod pod-subpath-test-configmap-m8tb May 16 00:42:09.815: INFO: Deleting pod "pod-subpath-test-configmap-m8tb" in namespace "subpath-8480" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:42:09.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8480" for this suite. • [SLOW TEST:24.413 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":196,"skipped":3154,"failed":0} [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:42:09.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-4472/configmap-test-0d20e58a-3849-43ad-846d-1e5362c96a91 STEP: Creating a pod to test consume configMaps May 16 00:42:10.008: INFO: Waiting up to 5m0s for pod "pod-configmaps-7506431d-34f5-49db-a4bc-362be8d2a25a" in namespace "configmap-4472" to be "Succeeded or Failed" May 16 00:42:10.014: INFO: Pod "pod-configmaps-7506431d-34f5-49db-a4bc-362be8d2a25a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.412797ms May 16 00:42:12.018: INFO: Pod "pod-configmaps-7506431d-34f5-49db-a4bc-362be8d2a25a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010604432s May 16 00:42:14.023: INFO: Pod "pod-configmaps-7506431d-34f5-49db-a4bc-362be8d2a25a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014834109s STEP: Saw pod success May 16 00:42:14.023: INFO: Pod "pod-configmaps-7506431d-34f5-49db-a4bc-362be8d2a25a" satisfied condition "Succeeded or Failed" May 16 00:42:14.026: INFO: Trying to get logs from node latest-worker pod pod-configmaps-7506431d-34f5-49db-a4bc-362be8d2a25a container env-test: STEP: delete the pod May 16 00:42:14.080: INFO: Waiting for pod pod-configmaps-7506431d-34f5-49db-a4bc-362be8d2a25a to disappear May 16 00:42:14.092: INFO: Pod pod-configmaps-7506431d-34f5-49db-a4bc-362be8d2a25a no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:42:14.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4472" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":197,"skipped":3154,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:42:14.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9716, will wait for the garbage collector to delete the pods May 16 00:42:20.245: INFO: Deleting Job.batch foo took: 5.835989ms May 16 00:42:20.545: INFO: Terminating Job.batch foo pods took: 300.211888ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:43:05.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9716" for this suite. 
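------------------------------
[editor's note] The Job spec above creates a parallel Job, checks that the number of active pods matches .spec.parallelism, then deletes the Job and waits for the garbage collector to remove the dependent pods. A minimal sketch with hypothetical names (the suite itself uses "foo"):

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: foo-demo
spec:
  parallelism: 2                # the spec verifies active pods == parallelism
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: sleeper
        image: busybox
        command: ["sleep", "300"]
EOF

kubectl delete job foo-demo     # the Job's pods are then torn down by the garbage collector
------------------------------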
• [SLOW TEST:51.254 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":198,"skipped":3198,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:43:05.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9684.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9684.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9684.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9684.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9684.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9684.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9684.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9684.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9684.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9684.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 16 00:43:13.600: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:13.604: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:13.608: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:13.612: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:13.623: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:13.626: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:13.629: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:13.632: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:13.637: INFO: Lookups using dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9684.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9684.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local jessie_udp@dns-test-service-2.dns-9684.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9684.svc.cluster.local] May 16 00:43:18.642: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource 
(get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:18.646: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:18.650: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:18.653: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:18.663: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:18.666: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:18.669: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:18.672: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:18.679: INFO: Lookups using dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9684.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9684.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local jessie_udp@dns-test-service-2.dns-9684.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9684.svc.cluster.local] May 16 00:43:23.641: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:23.645: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:23.648: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:23.651: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9684.svc.cluster.local from 
pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:23.658: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:23.660: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:23.663: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:23.665: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:23.670: INFO: Lookups using dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9684.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9684.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local jessie_udp@dns-test-service-2.dns-9684.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9684.svc.cluster.local] May 16 00:43:28.643: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:28.647: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:28.650: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:28.652: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:28.662: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:28.665: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods 
dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:28.668: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:28.670: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:28.675: INFO: Lookups using dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9684.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9684.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local jessie_udp@dns-test-service-2.dns-9684.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9684.svc.cluster.local] May 16 00:43:33.647: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:33.651: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:33.654: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:33.657: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:33.665: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:33.667: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:33.669: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:33.671: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:33.676: INFO: Lookups using dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9684.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9684.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local jessie_udp@dns-test-service-2.dns-9684.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9684.svc.cluster.local] May 16 00:43:38.641: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:38.651: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:38.654: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:38.678: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:38.687: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:38.690: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:38.693: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:38.695: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9684.svc.cluster.local from pod dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5: the server could not find the requested resource (get pods dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5) May 16 00:43:38.740: INFO: Lookups using dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9684.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9684.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9684.svc.cluster.local jessie_udp@dns-test-service-2.dns-9684.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9684.svc.cluster.local] May 16 00:43:43.675: INFO: DNS probes using dns-9684/dns-test-f85e610d-e813-4ce1-88a9-12bc85d673b5 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:43:43.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9684" for this suite. • [SLOW TEST:38.898 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":199,"skipped":3215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:43:44.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-3330/secret-test-0d588ac7-3b76-4485-bb4e-f614c5e56588 STEP: Creating a pod to test consume secrets May 16 00:43:44.415: INFO: Waiting up to 5m0s for pod "pod-configmaps-790c5763-cb86-4460-9d32-91db1161b777" in namespace "secrets-3330" to be "Succeeded or Failed" May 16 00:43:44.456: INFO: Pod "pod-configmaps-790c5763-cb86-4460-9d32-91db1161b777": Phase="Pending", Reason="", readiness=false. Elapsed: 41.212262ms May 16 00:43:46.762: INFO: Pod "pod-configmaps-790c5763-cb86-4460-9d32-91db1161b777": Phase="Pending", Reason="", readiness=false. Elapsed: 2.347826846s May 16 00:43:48.806: INFO: Pod "pod-configmaps-790c5763-cb86-4460-9d32-91db1161b777": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.391002542s STEP: Saw pod success May 16 00:43:48.806: INFO: Pod "pod-configmaps-790c5763-cb86-4460-9d32-91db1161b777" satisfied condition "Succeeded or Failed" May 16 00:43:48.809: INFO: Trying to get logs from node latest-worker pod pod-configmaps-790c5763-cb86-4460-9d32-91db1161b777 container env-test: STEP: delete the pod May 16 00:43:48.945: INFO: Waiting for pod pod-configmaps-790c5763-cb86-4460-9d32-91db1161b777 to disappear May 16 00:43:48.948: INFO: Pod pod-configmaps-790c5763-cb86-4460-9d32-91db1161b777 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:43:48.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3330" for this suite. 
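------------------------------
[editor's note] The Secrets spec above stores a key/value pair in a Secret and surfaces it to the container through an environment variable, then asserts on the container's output. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: env-test-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:           # resolved from the Secret when the container starts
          name: secret-test-demo
          key: data-1
EOF
------------------------------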
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":200,"skipped":3244,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:43:48.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:43:55.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4781" for this suite. STEP: Destroying namespace "nsdeletetest-9729" for this suite. May 16 00:43:55.686: INFO: Namespace nsdeletetest-9729 was already deleted STEP: Destroying namespace "nsdeletetest-3743" for this suite. • [SLOW TEST:6.733 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":201,"skipped":3258,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:43:55.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 May 16 00:43:55.807: INFO: Waiting up to 1m0s for all nodes to be ready May 16 00:44:55.831: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:44:55.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. May 16 00:44:59.961: INFO: found a healthy node: latest-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:45:16.425: INFO: pods created so far: [1 1 1] May 16 00:45:16.425: INFO: length of pods created so far: 3 May 16 00:45:22.434: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:45:29.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-8135" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:45:29.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9705" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:93.936 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":202,"skipped":3273,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:45:29.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 16 00:45:34.332: INFO: Successfully updated pod "labelsupdate708c0b0c-7cd2-4ae4-80e1-031c683e91d9" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:45:36.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9067" for this suite. 
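------------------------------
[editor's note] The Downward API spec above projects the pod's labels into a file via a downwardAPI volume and then verifies that the kubelet rewrites the file after the labels are modified. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    key1: value1
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:                # the kubelet refreshes these files when pod metadata changes
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF

kubectl label pod labelsupdate-demo key1=value2 --overwrite
# /etc/podinfo/labels inside the container catches up shortly afterwards,
# which is what the "Successfully updated pod" step above confirms.
------------------------------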
• [SLOW TEST:6.753 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":203,"skipped":3292,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:45:36.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 16 00:45:36.878: INFO: Waiting up to 5m0s for pod "pod-cdac8aee-a7ea-4273-b19f-bd5cbc9471f0" in namespace "emptydir-4422" to be "Succeeded or Failed" May 16 00:45:36.930: INFO: Pod "pod-cdac8aee-a7ea-4273-b19f-bd5cbc9471f0": Phase="Pending", Reason="", readiness=false. Elapsed: 51.851357ms May 16 00:45:38.933: INFO: Pod "pod-cdac8aee-a7ea-4273-b19f-bd5cbc9471f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055418752s May 16 00:45:40.937: INFO: Pod "pod-cdac8aee-a7ea-4273-b19f-bd5cbc9471f0": Phase="Running", Reason="", readiness=true. Elapsed: 4.059283227s May 16 00:45:42.941: INFO: Pod "pod-cdac8aee-a7ea-4273-b19f-bd5cbc9471f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063705733s STEP: Saw pod success May 16 00:45:42.941: INFO: Pod "pod-cdac8aee-a7ea-4273-b19f-bd5cbc9471f0" satisfied condition "Succeeded or Failed" May 16 00:45:42.944: INFO: Trying to get logs from node latest-worker2 pod pod-cdac8aee-a7ea-4273-b19f-bd5cbc9471f0 container test-container: STEP: delete the pod May 16 00:45:42.974: INFO: Waiting for pod pod-cdac8aee-a7ea-4273-b19f-bd5cbc9471f0 to disappear May 16 00:45:43.009: INFO: Pod pod-cdac8aee-a7ea-4273-b19f-bd5cbc9471f0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:45:43.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4422" for this suite. 
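------------------------------
[editor's note] The EmptyDir spec above runs as a non-root user, writes a file with 0644 permissions into a memory-backed (tmpfs) emptyDir, and checks the resulting mode and content. A minimal sketch; the UID and names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001             # illustrative non-root UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory            # tmpfs-backed, per the (tmpfs) variant exercised here
EOF
------------------------------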
• [SLOW TEST:6.639 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":204,"skipped":3299,"failed":0} [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:45:43.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 16 00:45:43.218: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:43.242: INFO: Number of nodes with available pods: 0 May 16 00:45:43.243: INFO: Node latest-worker is running more than one daemon pod May 16 00:45:44.256: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:44.260: INFO: Number of nodes with available pods: 0 May 16 00:45:44.260: INFO: Node latest-worker is running more than one daemon pod May 16 00:45:45.263: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:45.267: INFO: Number of nodes with available pods: 0 May 16 00:45:45.267: INFO: Node latest-worker is running more than one daemon pod May 16 00:45:46.248: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:46.251: INFO: Number of nodes with available pods: 0 May 16 00:45:46.251: INFO: Node latest-worker is running more than one daemon pod May 16 00:45:47.247: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:47.251: INFO: Number of nodes with available pods: 0 May 16 00:45:47.251: INFO: Node latest-worker is running more than one daemon pod May 16 00:45:48.248: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:48.251: INFO: Number of nodes with available pods: 2 May 16 00:45:48.251: INFO: Number of running nodes: 2, number of available pods: 2 
STEP: Stop a daemon pod, check that the daemon pod is revived. May 16 00:45:48.291: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:48.294: INFO: Number of nodes with available pods: 1 May 16 00:45:48.294: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:45:49.300: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:49.303: INFO: Number of nodes with available pods: 1 May 16 00:45:49.303: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:45:50.305: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:50.308: INFO: Number of nodes with available pods: 1 May 16 00:45:50.308: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:45:51.298: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:51.302: INFO: Number of nodes with available pods: 1 May 16 00:45:51.302: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:45:52.298: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:52.300: INFO: Number of nodes with available pods: 1 May 16 00:45:52.300: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:45:53.327: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:53.330: INFO: Number of nodes with available pods: 1 May 16 00:45:53.330: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:45:54.299: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:54.321: INFO: Number of nodes with available pods: 1 May 16 00:45:54.321: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:45:55.316: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:55.320: INFO: Number of nodes with available pods: 1 May 16 00:45:55.320: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:45:56.334: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:56.339: INFO: Number of nodes with available pods: 1 May 16 00:45:56.339: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:45:57.300: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:57.303: INFO: Number of nodes with available pods: 1 May 16 00:45:57.303: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:45:58.300: INFO: DaemonSet pods can't 
tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:58.303: INFO: Number of nodes with available pods: 1 May 16 00:45:58.303: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:45:59.300: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:45:59.303: INFO: Number of nodes with available pods: 2 May 16 00:45:59.303: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3922, will wait for the garbage collector to delete the pods May 16 00:45:59.366: INFO: Deleting DaemonSet.extensions daemon-set took: 6.895447ms May 16 00:45:59.466: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.256106ms May 16 00:46:05.512: INFO: Number of nodes with available pods: 0 May 16 00:46:05.512: INFO: Number of running nodes: 0, number of available pods: 0 May 16 00:46:05.515: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3922/daemonsets","resourceVersion":"5020359"},"items":null} May 16 00:46:05.517: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3922/pods","resourceVersion":"5020359"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:46:05.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3922" for this suite. 
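Note on the repeated "DaemonSet pods can't tolerate node latest-control-plane" lines above: the test's DaemonSet carries no toleration for the control plane's node-role.kubernetes.io/master:NoSchedule taint, so the framework skips that node and only counts the two worker nodes. A minimal sketch of a DaemonSet of this shape, runnable with kubectl; the image, labels, and namespace are illustrative assumptions, not the manifest the suite generates:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                  # same name as in the log above
  namespace: default                # the suite used namespace daemonsets-3922
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2   # assumed placeholder image
      # No tolerations are set, so pods never schedule onto nodes tainted
      # node-role.kubernetes.io/master:NoSchedule -- hence the "skip checking
      # this node" messages for latest-control-plane.
EOF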
• [SLOW TEST:22.513 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":205,"skipped":3299,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:46:05.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9779 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9779;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9779 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9779;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9779.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9779.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9779.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9779.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9779.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9779.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9779.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9779.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9779.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9779.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9779.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9779.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9779.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 128.178.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.178.128_udp@PTR;check="$$(dig +tcp +noall +answer +search 128.178.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.178.128_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9779 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9779;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9779 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9779;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9779.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9779.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9779.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9779.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9779.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9779.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9779.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9779.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9779.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9779.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9779.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9779.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9779.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 128.178.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.178.128_udp@PTR;check="$$(dig +tcp +noall +answer +search 128.178.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.178.128_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 16 00:46:11.907: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:11.910: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:11.913: INFO: Unable to read wheezy_udp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:11.916: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:11.919: INFO: Unable to read wheezy_udp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:11.921: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:11.924: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:11.927: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:11.946: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:11.948: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:11.951: INFO: Unable to read jessie_udp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:11.954: INFO: Unable to read jessie_tcp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:11.957: INFO: Unable to read jessie_udp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:11.960: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:11.962: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:11.965: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:11.978: INFO: Lookups using dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9779 wheezy_tcp@dns-test-service.dns-9779 wheezy_udp@dns-test-service.dns-9779.svc wheezy_tcp@dns-test-service.dns-9779.svc wheezy_udp@_http._tcp.dns-test-service.dns-9779.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9779.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9779 jessie_tcp@dns-test-service.dns-9779 jessie_udp@dns-test-service.dns-9779.svc jessie_tcp@dns-test-service.dns-9779.svc jessie_udp@_http._tcp.dns-test-service.dns-9779.svc jessie_tcp@_http._tcp.dns-test-service.dns-9779.svc] May 16 00:46:16.982: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:16.985: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:16.987: INFO: Unable to read wheezy_udp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:16.990: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:16.992: INFO: Unable to read wheezy_udp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:16.994: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:16.995: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:16.997: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:17.013: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:17.015: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:17.016: INFO: Unable to read jessie_udp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:17.018: INFO: Unable to read jessie_tcp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:17.020: INFO: Unable to read jessie_udp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:17.022: INFO: Unable to read jessie_tcp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:17.024: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:17.026: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:17.040: INFO: Lookups using dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9779 wheezy_tcp@dns-test-service.dns-9779 wheezy_udp@dns-test-service.dns-9779.svc wheezy_tcp@dns-test-service.dns-9779.svc wheezy_udp@_http._tcp.dns-test-service.dns-9779.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9779.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9779 jessie_tcp@dns-test-service.dns-9779 jessie_udp@dns-test-service.dns-9779.svc jessie_tcp@dns-test-service.dns-9779.svc jessie_udp@_http._tcp.dns-test-service.dns-9779.svc jessie_tcp@_http._tcp.dns-test-service.dns-9779.svc] May 16 00:46:21.984: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:21.988: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:21.991: INFO: Unable to read wheezy_udp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:21.994: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9779 from pod 
dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:21.997: INFO: Unable to read wheezy_udp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:22.000: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:22.004: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:22.007: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:22.026: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:22.029: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:22.033: INFO: Unable to read jessie_udp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:22.036: INFO: Unable to read jessie_tcp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:22.039: INFO: Unable to read jessie_udp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:22.042: INFO: Unable to read jessie_tcp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:22.045: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:22.048: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:22.075: INFO: Lookups using dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9779 wheezy_tcp@dns-test-service.dns-9779 wheezy_udp@dns-test-service.dns-9779.svc wheezy_tcp@dns-test-service.dns-9779.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-9779.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9779.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9779 jessie_tcp@dns-test-service.dns-9779 jessie_udp@dns-test-service.dns-9779.svc jessie_tcp@dns-test-service.dns-9779.svc jessie_udp@_http._tcp.dns-test-service.dns-9779.svc jessie_tcp@_http._tcp.dns-test-service.dns-9779.svc] May 16 00:46:26.983: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:26.986: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:26.988: INFO: Unable to read wheezy_udp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:26.990: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:26.993: INFO: Unable to read wheezy_udp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:26.997: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:27.000: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:27.003: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:27.020: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:27.022: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:27.025: INFO: Unable to read jessie_udp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:27.027: INFO: Unable to read jessie_tcp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:27.030: INFO: Unable to read jessie_udp@dns-test-service.dns-9779.svc from pod 
dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:27.032: INFO: Unable to read jessie_tcp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:27.035: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:27.037: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:27.051: INFO: Lookups using dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9779 wheezy_tcp@dns-test-service.dns-9779 wheezy_udp@dns-test-service.dns-9779.svc wheezy_tcp@dns-test-service.dns-9779.svc wheezy_udp@_http._tcp.dns-test-service.dns-9779.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9779.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9779 jessie_tcp@dns-test-service.dns-9779 jessie_udp@dns-test-service.dns-9779.svc jessie_tcp@dns-test-service.dns-9779.svc jessie_udp@_http._tcp.dns-test-service.dns-9779.svc jessie_tcp@_http._tcp.dns-test-service.dns-9779.svc] May 16 00:46:31.984: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:31.988: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:31.991: INFO: Unable to read wheezy_udp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:31.995: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:31.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:32.002: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:32.006: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:32.010: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9779.svc from pod 
dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:32.031: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:32.034: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:32.038: INFO: Unable to read jessie_udp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:32.040: INFO: Unable to read jessie_tcp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:32.044: INFO: Unable to read jessie_udp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:32.048: INFO: Unable to read jessie_tcp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:32.056: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:32.059: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:32.102: INFO: Lookups using dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9779 wheezy_tcp@dns-test-service.dns-9779 wheezy_udp@dns-test-service.dns-9779.svc wheezy_tcp@dns-test-service.dns-9779.svc wheezy_udp@_http._tcp.dns-test-service.dns-9779.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9779.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9779 jessie_tcp@dns-test-service.dns-9779 jessie_udp@dns-test-service.dns-9779.svc jessie_tcp@dns-test-service.dns-9779.svc jessie_udp@_http._tcp.dns-test-service.dns-9779.svc jessie_tcp@_http._tcp.dns-test-service.dns-9779.svc] May 16 00:46:36.985: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:36.988: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:36.990: INFO: Unable to read wheezy_udp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the 
server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:36.992: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:36.994: INFO: Unable to read wheezy_udp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:36.997: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:36.999: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:37.002: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:37.019: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:37.022: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:37.024: INFO: Unable to read jessie_udp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:37.027: INFO: Unable to read jessie_tcp@dns-test-service.dns-9779 from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:37.030: INFO: Unable to read jessie_udp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:37.032: INFO: Unable to read jessie_tcp@dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:37.035: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:37.038: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9779.svc from pod dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0: the server could not find the requested resource (get pods dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0) May 16 00:46:37.055: INFO: Lookups using dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9779 wheezy_tcp@dns-test-service.dns-9779 wheezy_udp@dns-test-service.dns-9779.svc wheezy_tcp@dns-test-service.dns-9779.svc wheezy_udp@_http._tcp.dns-test-service.dns-9779.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9779.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9779 jessie_tcp@dns-test-service.dns-9779 jessie_udp@dns-test-service.dns-9779.svc jessie_tcp@dns-test-service.dns-9779.svc jessie_udp@_http._tcp.dns-test-service.dns-9779.svc jessie_tcp@_http._tcp.dns-test-service.dns-9779.svc] May 16 00:46:42.152: INFO: DNS probes using dns-9779/dns-test-92b7f215-22df-4c62-a659-f6b815d37bf0 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:46:43.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9779" for this suite. • [SLOW TEST:38.179 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":206,"skipped":3318,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:46:43.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 00:46:44.681: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 00:46:46.694: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186804, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186804, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186805, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186804, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 00:46:49.772: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:46:49.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2697-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:46:50.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7469" for this suite. STEP: Destroying namespace "webhook-7469-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.386 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":207,"skipped":3376,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:46:51.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 16 00:46:55.763: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ab3f1290-7722-4e7c-9fe5-82cfdfc18d09" May 16 00:46:55.763: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ab3f1290-7722-4e7c-9fe5-82cfdfc18d09" in namespace "pods-3841" to be "terminated due to deadline exceeded" May 16 00:46:55.896: INFO: Pod 
"pod-update-activedeadlineseconds-ab3f1290-7722-4e7c-9fe5-82cfdfc18d09": Phase="Running", Reason="", readiness=true. Elapsed: 132.499364ms May 16 00:46:57.900: INFO: Pod "pod-update-activedeadlineseconds-ab3f1290-7722-4e7c-9fe5-82cfdfc18d09": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.13685089s May 16 00:46:57.900: INFO: Pod "pod-update-activedeadlineseconds-ab3f1290-7722-4e7c-9fe5-82cfdfc18d09" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:46:57.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3841" for this suite. • [SLOW TEST:6.838 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":208,"skipped":3388,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:46:57.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-f603693f-697c-4e2e-8e4a-d7fe782e9ded STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:47:04.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6616" for this suite. 
• [SLOW TEST:6.120 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":209,"skipped":3395,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:47:04.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container May 16 00:47:10.187: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-134 PodName:pod-sharedvolume-64215434-18eb-4956-956a-ee0c3d600cb4 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 00:47:10.187: INFO: >>> kubeConfig: /root/.kube/config I0516 00:47:10.214518 7 log.go:172] (0xc00242ce70) (0xc00219eaa0) Create stream I0516 00:47:10.214546 7 log.go:172] (0xc00242ce70) (0xc00219eaa0) Stream added, broadcasting: 1 I0516 00:47:10.221853 7 log.go:172] (0xc00242ce70) Reply frame received for 1 I0516 00:47:10.221905 7 log.go:172] (0xc00242ce70) (0xc00201efa0) Create stream I0516 00:47:10.221922 7 log.go:172] (0xc00242ce70) (0xc00201efa0) Stream added, broadcasting: 3 I0516 00:47:10.222967 7 log.go:172] (0xc00242ce70) Reply frame received for 3 I0516 00:47:10.223013 7 log.go:172] (0xc00242ce70) (0xc00179a5a0) Create stream I0516 00:47:10.223042 7 log.go:172] (0xc00242ce70) (0xc00179a5a0) Stream added, broadcasting: 5 I0516 00:47:10.224569 7 log.go:172] (0xc00242ce70) Reply frame received for 5 I0516 00:47:10.290890 7 log.go:172] (0xc00242ce70) Data frame received for 3 I0516 00:47:10.290915 7 log.go:172] (0xc00201efa0) (3) Data frame handling I0516 00:47:10.290928 7 log.go:172] (0xc00201efa0) (3) Data frame sent I0516 00:47:10.290955 7 log.go:172] (0xc00242ce70) Data frame received for 3 I0516 00:47:10.290972 7 log.go:172] (0xc00201efa0) (3) Data frame handling I0516 00:47:10.291051 7 log.go:172] (0xc00242ce70) Data frame received for 5 I0516 00:47:10.291062 7 log.go:172] (0xc00179a5a0) (5) Data frame handling I0516 00:47:10.293002 7 log.go:172] (0xc00242ce70) Data frame received for 1 I0516 00:47:10.293026 7 log.go:172] (0xc00219eaa0) (1) Data frame handling I0516 00:47:10.293042 7 log.go:172] (0xc00219eaa0) (1) Data frame sent I0516 00:47:10.293066 7 log.go:172] (0xc00242ce70) (0xc00219eaa0) Stream removed, broadcasting: 1 I0516 00:47:10.293088 7 log.go:172] (0xc00242ce70) Go away received I0516 00:47:10.293499 7 log.go:172] (0xc00242ce70) (0xc00219eaa0) Stream removed, 
broadcasting: 1 I0516 00:47:10.293523 7 log.go:172] (0xc00242ce70) (0xc00201efa0) Stream removed, broadcasting: 3 I0516 00:47:10.293539 7 log.go:172] (0xc00242ce70) (0xc00179a5a0) Stream removed, broadcasting: 5 May 16 00:47:10.293: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:47:10.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-134" for this suite. • [SLOW TEST:6.242 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":210,"skipped":3404,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:47:10.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 00:47:11.025: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 00:47:13.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186831, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186831, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186831, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725186831, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 00:47:16.248: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:47:16.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the 
mutating webhook for custom resource e2e-test-webhook-5101-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:47:17.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4343" for this suite. STEP: Destroying namespace "webhook-4343-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.173 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":211,"skipped":3413,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:47:17.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-fc9744ce-bdd0-4573-b321-2ea86f505fd4 STEP: Creating a pod to test consume secrets May 16 00:47:17.576: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-870b3df7-cafb-4275-9e3c-208bc7a09407" in namespace "projected-4224" to be "Succeeded or Failed" May 16 00:47:17.856: INFO: Pod "pod-projected-secrets-870b3df7-cafb-4275-9e3c-208bc7a09407": Phase="Pending", Reason="", readiness=false. Elapsed: 280.288209ms May 16 00:47:19.860: INFO: Pod "pod-projected-secrets-870b3df7-cafb-4275-9e3c-208bc7a09407": Phase="Pending", Reason="", readiness=false. Elapsed: 2.283875163s May 16 00:47:21.864: INFO: Pod "pod-projected-secrets-870b3df7-cafb-4275-9e3c-208bc7a09407": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.287841128s STEP: Saw pod success May 16 00:47:21.864: INFO: Pod "pod-projected-secrets-870b3df7-cafb-4275-9e3c-208bc7a09407" satisfied condition "Succeeded or Failed" May 16 00:47:21.866: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-870b3df7-cafb-4275-9e3c-208bc7a09407 container projected-secret-volume-test: STEP: delete the pod May 16 00:47:21.939: INFO: Waiting for pod pod-projected-secrets-870b3df7-cafb-4275-9e3c-208bc7a09407 to disappear May 16 00:47:21.956: INFO: Pod pod-projected-secrets-870b3df7-cafb-4275-9e3c-208bc7a09407 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:47:21.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4224" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":212,"skipped":3417,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:47:21.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 16 00:47:22.129: INFO: Waiting up to 5m0s for pod "downward-api-6c2ffcff-5770-48a3-8181-8823ba38e4c1" in namespace "downward-api-2907" to be "Succeeded or Failed" May 16 00:47:22.136: INFO: Pod "downward-api-6c2ffcff-5770-48a3-8181-8823ba38e4c1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.271737ms May 16 00:47:24.141: INFO: Pod "downward-api-6c2ffcff-5770-48a3-8181-8823ba38e4c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011689003s May 16 00:47:26.145: INFO: Pod "downward-api-6c2ffcff-5770-48a3-8181-8823ba38e4c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015987718s STEP: Saw pod success May 16 00:47:26.145: INFO: Pod "downward-api-6c2ffcff-5770-48a3-8181-8823ba38e4c1" satisfied condition "Succeeded or Failed" May 16 00:47:26.149: INFO: Trying to get logs from node latest-worker2 pod downward-api-6c2ffcff-5770-48a3-8181-8823ba38e4c1 container dapi-container: STEP: delete the pod May 16 00:47:26.182: INFO: Waiting for pod downward-api-6c2ffcff-5770-48a3-8181-8823ba38e4c1 to disappear May 16 00:47:26.197: INFO: Pod downward-api-6c2ffcff-5770-48a3-8181-8823ba38e4c1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:47:26.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2907" for this suite. 
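The Downward API test above verifies that a container can receive the IP of its node through an environment variable populated from the pod's status. A minimal sketch of that mechanism; the pod name, image, and variable name are assumptions for illustration:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-hostip-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container          # the log's container is also named dapi-container
    image: busybox                # assumed image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # downward API field: IP of the node running the pod
EOF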
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":213,"skipped":3447,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:47:26.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:47:30.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3204" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":214,"skipped":3480,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:47:30.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8433 May 16 00:47:34.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8433 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 16 00:47:38.963: INFO: stderr: "I0516 00:47:38.858468 3018 log.go:172] (0xc000674580) (0xc0007054a0) Create stream\nI0516 00:47:38.858519 3018 log.go:172] (0xc000674580) (0xc0007054a0) Stream added, broadcasting: 1\nI0516 00:47:38.860228 3018 log.go:172] (0xc000674580) Reply frame received for 1\nI0516 00:47:38.860283 3018 log.go:172] (0xc000674580) (0xc0006b0a00) Create stream\nI0516 00:47:38.860296 3018 log.go:172] (0xc000674580) (0xc0006b0a00) Stream added, broadcasting: 3\nI0516 00:47:38.861086 3018 log.go:172] (0xc000674580) Reply frame received for 3\nI0516 00:47:38.861279 3018 log.go:172] (0xc000674580) (0xc000705540) Create stream\nI0516 00:47:38.861294 3018 log.go:172] (0xc000674580) (0xc000705540) Stream added, broadcasting: 5\nI0516 00:47:38.862094 3018 log.go:172] (0xc000674580) Reply 
frame received for 5\nI0516 00:47:38.952856 3018 log.go:172] (0xc000674580) Data frame received for 5\nI0516 00:47:38.952905 3018 log.go:172] (0xc000705540) (5) Data frame handling\nI0516 00:47:38.952936 3018 log.go:172] (0xc000705540) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0516 00:47:38.956090 3018 log.go:172] (0xc000674580) Data frame received for 3\nI0516 00:47:38.956108 3018 log.go:172] (0xc0006b0a00) (3) Data frame handling\nI0516 00:47:38.956128 3018 log.go:172] (0xc0006b0a00) (3) Data frame sent\nI0516 00:47:38.956699 3018 log.go:172] (0xc000674580) Data frame received for 5\nI0516 00:47:38.956714 3018 log.go:172] (0xc000705540) (5) Data frame handling\nI0516 00:47:38.956797 3018 log.go:172] (0xc000674580) Data frame received for 3\nI0516 00:47:38.956811 3018 log.go:172] (0xc0006b0a00) (3) Data frame handling\nI0516 00:47:38.958593 3018 log.go:172] (0xc000674580) Data frame received for 1\nI0516 00:47:38.958610 3018 log.go:172] (0xc0007054a0) (1) Data frame handling\nI0516 00:47:38.958618 3018 log.go:172] (0xc0007054a0) (1) Data frame sent\nI0516 00:47:38.958649 3018 log.go:172] (0xc000674580) (0xc0007054a0) Stream removed, broadcasting: 1\nI0516 00:47:38.958670 3018 log.go:172] (0xc000674580) Go away received\nI0516 00:47:38.958971 3018 log.go:172] (0xc000674580) (0xc0007054a0) Stream removed, broadcasting: 1\nI0516 00:47:38.958984 3018 log.go:172] (0xc000674580) (0xc0006b0a00) Stream removed, broadcasting: 3\nI0516 00:47:38.958990 3018 log.go:172] (0xc000674580) (0xc000705540) Stream removed, broadcasting: 5\n" May 16 00:47:38.963: INFO: stdout: "iptables" May 16 00:47:38.963: INFO: proxyMode: iptables May 16 00:47:38.968: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 16 00:47:38.990: INFO: Pod kube-proxy-mode-detector still exists May 16 00:47:40.990: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 16 00:47:40.995: INFO: Pod kube-proxy-mode-detector still exists May 16 00:47:42.990: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 16 00:47:42.994: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-8433 STEP: creating replication controller affinity-nodeport-timeout in namespace services-8433 I0516 00:47:43.100723 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-8433, replica count: 3 I0516 00:47:46.151105 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 00:47:49.151284 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 16 00:47:49.161: INFO: Creating new exec pod May 16 00:47:54.208: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8433 execpod-affinityhx7kb -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 16 00:47:54.454: INFO: stderr: "I0516 00:47:54.360275 3045 log.go:172] (0xc0007f0840) (0xc0003c5040) Create stream\nI0516 00:47:54.360335 3045 log.go:172] (0xc0007f0840) (0xc0003c5040) Stream added, broadcasting: 1\nI0516 00:47:54.365270 3045 log.go:172] (0xc0007f0840) Reply frame received for 1\nI0516 00:47:54.365316 3045 log.go:172] (0xc0007f0840) (0xc0000dd7c0) Create stream\nI0516 00:47:54.365330 3045 log.go:172] 
(0xc0007f0840) (0xc0000dd7c0) Stream added, broadcasting: 3\nI0516 00:47:54.366303 3045 log.go:172] (0xc0007f0840) Reply frame received for 3\nI0516 00:47:54.366335 3045 log.go:172] (0xc0007f0840) (0xc00023c820) Create stream\nI0516 00:47:54.366349 3045 log.go:172] (0xc0007f0840) (0xc00023c820) Stream added, broadcasting: 5\nI0516 00:47:54.367152 3045 log.go:172] (0xc0007f0840) Reply frame received for 5\nI0516 00:47:54.446316 3045 log.go:172] (0xc0007f0840) Data frame received for 3\nI0516 00:47:54.446369 3045 log.go:172] (0xc0000dd7c0) (3) Data frame handling\nI0516 00:47:54.446400 3045 log.go:172] (0xc0007f0840) Data frame received for 5\nI0516 00:47:54.446419 3045 log.go:172] (0xc00023c820) (5) Data frame handling\nI0516 00:47:54.446430 3045 log.go:172] (0xc00023c820) (5) Data frame sent\nI0516 00:47:54.446440 3045 log.go:172] (0xc0007f0840) Data frame received for 5\nI0516 00:47:54.446450 3045 log.go:172] (0xc00023c820) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0516 00:47:54.448495 3045 log.go:172] (0xc0007f0840) Data frame received for 1\nI0516 00:47:54.448531 3045 log.go:172] (0xc0003c5040) (1) Data frame handling\nI0516 00:47:54.448550 3045 log.go:172] (0xc0003c5040) (1) Data frame sent\nI0516 00:47:54.448575 3045 log.go:172] (0xc0007f0840) (0xc0003c5040) Stream removed, broadcasting: 1\nI0516 00:47:54.448607 3045 log.go:172] (0xc0007f0840) Go away received\nI0516 00:47:54.448894 3045 log.go:172] (0xc0007f0840) (0xc0003c5040) Stream removed, broadcasting: 1\nI0516 00:47:54.448911 3045 log.go:172] (0xc0007f0840) (0xc0000dd7c0) Stream removed, broadcasting: 3\nI0516 00:47:54.448919 3045 log.go:172] (0xc0007f0840) (0xc00023c820) Stream removed, broadcasting: 5\n" May 16 00:47:54.454: INFO: stdout: "" May 16 00:47:54.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8433 execpod-affinityhx7kb -- /bin/sh -x -c nc -zv -t -w 2 10.103.32.110 80' May 16 00:47:54.645: INFO: stderr: "I0516 00:47:54.584824 3067 log.go:172] (0xc00003a420) (0xc000630c80) Create stream\nI0516 00:47:54.584890 3067 log.go:172] (0xc00003a420) (0xc000630c80) Stream added, broadcasting: 1\nI0516 00:47:54.587410 3067 log.go:172] (0xc00003a420) Reply frame received for 1\nI0516 00:47:54.587459 3067 log.go:172] (0xc00003a420) (0xc0006414a0) Create stream\nI0516 00:47:54.587472 3067 log.go:172] (0xc00003a420) (0xc0006414a0) Stream added, broadcasting: 3\nI0516 00:47:54.588606 3067 log.go:172] (0xc00003a420) Reply frame received for 3\nI0516 00:47:54.588676 3067 log.go:172] (0xc00003a420) (0xc0005a0500) Create stream\nI0516 00:47:54.588709 3067 log.go:172] (0xc00003a420) (0xc0005a0500) Stream added, broadcasting: 5\nI0516 00:47:54.590248 3067 log.go:172] (0xc00003a420) Reply frame received for 5\nI0516 00:47:54.639063 3067 log.go:172] (0xc00003a420) Data frame received for 5\nI0516 00:47:54.639112 3067 log.go:172] (0xc0005a0500) (5) Data frame handling\nI0516 00:47:54.639130 3067 log.go:172] (0xc0005a0500) (5) Data frame sent\nI0516 00:47:54.639138 3067 log.go:172] (0xc00003a420) Data frame received for 5\nI0516 00:47:54.639145 3067 log.go:172] (0xc0005a0500) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.32.110 80\nConnection to 10.103.32.110 80 port [tcp/http] succeeded!\nI0516 00:47:54.639167 3067 log.go:172] (0xc00003a420) Data frame received for 3\nI0516 00:47:54.639175 3067 log.go:172] (0xc0006414a0) (3) Data frame 
handling\nI0516 00:47:54.640733 3067 log.go:172] (0xc00003a420) Data frame received for 1\nI0516 00:47:54.640746 3067 log.go:172] (0xc000630c80) (1) Data frame handling\nI0516 00:47:54.640756 3067 log.go:172] (0xc000630c80) (1) Data frame sent\nI0516 00:47:54.640764 3067 log.go:172] (0xc00003a420) (0xc000630c80) Stream removed, broadcasting: 1\nI0516 00:47:54.640855 3067 log.go:172] (0xc00003a420) Go away received\nI0516 00:47:54.641020 3067 log.go:172] (0xc00003a420) (0xc000630c80) Stream removed, broadcasting: 1\nI0516 00:47:54.641033 3067 log.go:172] (0xc00003a420) (0xc0006414a0) Stream removed, broadcasting: 3\nI0516 00:47:54.641038 3067 log.go:172] (0xc00003a420) (0xc0005a0500) Stream removed, broadcasting: 5\n" May 16 00:47:54.645: INFO: stdout: "" May 16 00:47:54.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8433 execpod-affinityhx7kb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30777' May 16 00:47:54.928: INFO: stderr: "I0516 00:47:54.780824 3090 log.go:172] (0xc0009bc000) (0xc000598aa0) Create stream\nI0516 00:47:54.780874 3090 log.go:172] (0xc0009bc000) (0xc000598aa0) Stream added, broadcasting: 1\nI0516 00:47:54.784977 3090 log.go:172] (0xc0009bc000) Reply frame received for 1\nI0516 00:47:54.785014 3090 log.go:172] (0xc0009bc000) (0xc0005bc640) Create stream\nI0516 00:47:54.785023 3090 log.go:172] (0xc0009bc000) (0xc0005bc640) Stream added, broadcasting: 3\nI0516 00:47:54.786066 3090 log.go:172] (0xc0009bc000) Reply frame received for 3\nI0516 00:47:54.786089 3090 log.go:172] (0xc0009bc000) (0xc000534aa0) Create stream\nI0516 00:47:54.786099 3090 log.go:172] (0xc0009bc000) (0xc000534aa0) Stream added, broadcasting: 5\nI0516 00:47:54.786920 3090 log.go:172] (0xc0009bc000) Reply frame received for 5\nI0516 00:47:54.924466 3090 log.go:172] (0xc0009bc000) Data frame received for 3\nI0516 00:47:54.924493 3090 log.go:172] (0xc0005bc640) (3) Data frame handling\nI0516 00:47:54.924505 3090 log.go:172] (0xc0009bc000) Data frame received for 5\nI0516 00:47:54.924509 3090 log.go:172] (0xc000534aa0) (5) Data frame handling\nI0516 00:47:54.924515 3090 log.go:172] (0xc000534aa0) (5) Data frame sent\nI0516 00:47:54.924519 3090 log.go:172] (0xc0009bc000) Data frame received for 5\nI0516 00:47:54.924522 3090 log.go:172] (0xc000534aa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30777\nConnection to 172.17.0.13 30777 port [tcp/30777] succeeded!\nI0516 00:47:54.925531 3090 log.go:172] (0xc0009bc000) Data frame received for 1\nI0516 00:47:54.925546 3090 log.go:172] (0xc000598aa0) (1) Data frame handling\nI0516 00:47:54.925554 3090 log.go:172] (0xc000598aa0) (1) Data frame sent\nI0516 00:47:54.925562 3090 log.go:172] (0xc0009bc000) (0xc000598aa0) Stream removed, broadcasting: 1\nI0516 00:47:54.925570 3090 log.go:172] (0xc0009bc000) Go away received\nI0516 00:47:54.925819 3090 log.go:172] (0xc0009bc000) (0xc000598aa0) Stream removed, broadcasting: 1\nI0516 00:47:54.925834 3090 log.go:172] (0xc0009bc000) (0xc0005bc640) Stream removed, broadcasting: 3\nI0516 00:47:54.925844 3090 log.go:172] (0xc0009bc000) (0xc000534aa0) Stream removed, broadcasting: 5\n" May 16 00:47:54.929: INFO: stdout: "" May 16 00:47:54.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8433 execpod-affinityhx7kb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30777' May 16 00:47:55.156: INFO: stderr: "I0516 00:47:55.058360 3110 log.go:172] 
(0xc000c5a000) (0xc0009de6e0) Create stream\nI0516 00:47:55.058410 3110 log.go:172] (0xc000c5a000) (0xc0009de6e0) Stream added, broadcasting: 1\nI0516 00:47:55.063134 3110 log.go:172] (0xc000c5a000) Reply frame received for 1\nI0516 00:47:55.063175 3110 log.go:172] (0xc000c5a000) (0xc000392280) Create stream\nI0516 00:47:55.063189 3110 log.go:172] (0xc000c5a000) (0xc000392280) Stream added, broadcasting: 3\nI0516 00:47:55.064320 3110 log.go:172] (0xc000c5a000) Reply frame received for 3\nI0516 00:47:55.064363 3110 log.go:172] (0xc000c5a000) (0xc0006730e0) Create stream\nI0516 00:47:55.064377 3110 log.go:172] (0xc000c5a000) (0xc0006730e0) Stream added, broadcasting: 5\nI0516 00:47:55.065578 3110 log.go:172] (0xc000c5a000) Reply frame received for 5\nI0516 00:47:55.148771 3110 log.go:172] (0xc000c5a000) Data frame received for 3\nI0516 00:47:55.148803 3110 log.go:172] (0xc000392280) (3) Data frame handling\nI0516 00:47:55.148837 3110 log.go:172] (0xc000c5a000) Data frame received for 5\nI0516 00:47:55.148850 3110 log.go:172] (0xc0006730e0) (5) Data frame handling\nI0516 00:47:55.148863 3110 log.go:172] (0xc0006730e0) (5) Data frame sent\nI0516 00:47:55.148874 3110 log.go:172] (0xc000c5a000) Data frame received for 5\nI0516 00:47:55.148879 3110 log.go:172] (0xc0006730e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30777\nConnection to 172.17.0.12 30777 port [tcp/30777] succeeded!\nI0516 00:47:55.150566 3110 log.go:172] (0xc000c5a000) Data frame received for 1\nI0516 00:47:55.150609 3110 log.go:172] (0xc0009de6e0) (1) Data frame handling\nI0516 00:47:55.150618 3110 log.go:172] (0xc0009de6e0) (1) Data frame sent\nI0516 00:47:55.150625 3110 log.go:172] (0xc000c5a000) (0xc0009de6e0) Stream removed, broadcasting: 1\nI0516 00:47:55.150634 3110 log.go:172] (0xc000c5a000) Go away received\nI0516 00:47:55.151005 3110 log.go:172] (0xc000c5a000) (0xc0009de6e0) Stream removed, broadcasting: 1\nI0516 00:47:55.151022 3110 log.go:172] (0xc000c5a000) (0xc000392280) Stream removed, broadcasting: 3\nI0516 00:47:55.151028 3110 log.go:172] (0xc000c5a000) (0xc0006730e0) Stream removed, broadcasting: 5\n" May 16 00:47:55.156: INFO: stdout: "" May 16 00:47:55.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8433 execpod-affinityhx7kb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30777/ ; done' May 16 00:47:55.472: INFO: stderr: "I0516 00:47:55.285808 3130 log.go:172] (0xc00003aa50) (0xc00018e640) Create stream\nI0516 00:47:55.285865 3130 log.go:172] (0xc00003aa50) (0xc00018e640) Stream added, broadcasting: 1\nI0516 00:47:55.292434 3130 log.go:172] (0xc00003aa50) Reply frame received for 1\nI0516 00:47:55.292487 3130 log.go:172] (0xc00003aa50) (0xc0006c5220) Create stream\nI0516 00:47:55.292513 3130 log.go:172] (0xc00003aa50) (0xc0006c5220) Stream added, broadcasting: 3\nI0516 00:47:55.294097 3130 log.go:172] (0xc00003aa50) Reply frame received for 3\nI0516 00:47:55.294150 3130 log.go:172] (0xc00003aa50) (0xc0006c5400) Create stream\nI0516 00:47:55.294176 3130 log.go:172] (0xc00003aa50) (0xc0006c5400) Stream added, broadcasting: 5\nI0516 00:47:55.296098 3130 log.go:172] (0xc00003aa50) Reply frame received for 5\nI0516 00:47:55.363767 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.363810 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.363826 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl 
-q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:47:55.363843 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.363852 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.363862 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.369802 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.369824 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.369857 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.370340 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.370365 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.370380 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:47:55.370392 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.370425 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.370440 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.376339 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.376361 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.376383 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.376778 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.376796 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.376816 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:47:55.376888 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.376912 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.376924 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.384717 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.384731 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.384742 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.385565 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.385596 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.385636 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\nI0516 00:47:55.385666 3130 log.go:172] (0xc00003aa50) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/I0516 00:47:55.385679 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.385729 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\n\nI0516 00:47:55.385760 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.385783 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.385812 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.390096 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.390116 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.390132 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.390402 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.390424 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.390442 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\nI0516 00:47:55.390460 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.390482 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\n+ echo\nI0516 00:47:55.390503 3130 log.go:172] (0xc00003aa50) Data frame received 
for 3\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:47:55.390520 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\nI0516 00:47:55.390537 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.390554 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.396325 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.396343 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.396354 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.397099 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.397353 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.397381 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.397413 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.397431 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.397442 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:47:55.404951 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.404966 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.404987 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.405952 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.405973 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.405984 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:47:55.405993 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.405998 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.406005 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.409753 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.409771 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.409779 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.410026 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.410044 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.410054 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:47:55.410158 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.410181 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.410198 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.415248 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.415271 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.415288 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.415818 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.415848 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.415867 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:47:55.415901 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.415910 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.415928 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.423551 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.423596 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 
00:47:55.423636 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.424069 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.424089 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.424113 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.424143 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.424160 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:47:55.424178 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.430054 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.430076 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.430097 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.430645 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.430671 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.430685 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.430703 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.430711 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.430725 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:47:55.434830 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.434850 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.434883 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.435624 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.435654 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.435724 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.435746 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.435774 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.435793 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:47:55.442126 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.442142 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.442154 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.442794 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.442829 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.442844 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.442859 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.442869 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.442879 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:47:55.447380 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.447411 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.447426 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.447829 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.447869 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.447883 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.447898 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.447918 3130 log.go:172] (0xc0006c5400) (5) Data frame 
handling\nI0516 00:47:55.447939 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:47:55.453377 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.453405 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.453418 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:47:55.453433 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.453487 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.453509 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.458456 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.458474 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.458488 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.459590 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.459612 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.459642 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.459698 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.459729 3130 log.go:172] (0xc0006c5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:47:55.459766 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.464322 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.464336 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.464344 3130 log.go:172] (0xc0006c5220) (3) Data frame sent\nI0516 00:47:55.465476 3130 log.go:172] (0xc00003aa50) Data frame received for 5\nI0516 00:47:55.465504 3130 log.go:172] (0xc0006c5400) (5) Data frame handling\nI0516 00:47:55.465595 3130 log.go:172] (0xc00003aa50) Data frame received for 3\nI0516 00:47:55.465613 3130 log.go:172] (0xc0006c5220) (3) Data frame handling\nI0516 00:47:55.466892 3130 log.go:172] (0xc00003aa50) Data frame received for 1\nI0516 00:47:55.466919 3130 log.go:172] (0xc00018e640) (1) Data frame handling\nI0516 00:47:55.466932 3130 log.go:172] (0xc00018e640) (1) Data frame sent\nI0516 00:47:55.467166 3130 log.go:172] (0xc00003aa50) (0xc00018e640) Stream removed, broadcasting: 1\nI0516 00:47:55.467558 3130 log.go:172] (0xc00003aa50) (0xc00018e640) Stream removed, broadcasting: 1\nI0516 00:47:55.467586 3130 log.go:172] (0xc00003aa50) (0xc0006c5220) Stream removed, broadcasting: 3\nI0516 00:47:55.467605 3130 log.go:172] (0xc00003aa50) (0xc0006c5400) Stream removed, broadcasting: 5\n" May 16 00:47:55.473: INFO: stdout: "\naffinity-nodeport-timeout-ln4v2\naffinity-nodeport-timeout-ln4v2\naffinity-nodeport-timeout-ln4v2\naffinity-nodeport-timeout-ln4v2\naffinity-nodeport-timeout-ln4v2\naffinity-nodeport-timeout-ln4v2\naffinity-nodeport-timeout-ln4v2\naffinity-nodeport-timeout-ln4v2\naffinity-nodeport-timeout-ln4v2\naffinity-nodeport-timeout-ln4v2\naffinity-nodeport-timeout-ln4v2\naffinity-nodeport-timeout-ln4v2\naffinity-nodeport-timeout-ln4v2\naffinity-nodeport-timeout-ln4v2\naffinity-nodeport-timeout-ln4v2\naffinity-nodeport-timeout-ln4v2" May 16 00:47:55.473: INFO: Received response from host: May 16 00:47:55.473: INFO: Received response from host: affinity-nodeport-timeout-ln4v2 May 16 00:47:55.473: INFO: Received response from host: affinity-nodeport-timeout-ln4v2 May 16 00:47:55.473: INFO: Received response from host: affinity-nodeport-timeout-ln4v2 May 
16 00:47:55.473: INFO: Received response from host: affinity-nodeport-timeout-ln4v2 May 16 00:47:55.473: INFO: Received response from host: affinity-nodeport-timeout-ln4v2 May 16 00:47:55.473: INFO: Received response from host: affinity-nodeport-timeout-ln4v2 May 16 00:47:55.473: INFO: Received response from host: affinity-nodeport-timeout-ln4v2 May 16 00:47:55.473: INFO: Received response from host: affinity-nodeport-timeout-ln4v2 May 16 00:47:55.473: INFO: Received response from host: affinity-nodeport-timeout-ln4v2 May 16 00:47:55.473: INFO: Received response from host: affinity-nodeport-timeout-ln4v2 May 16 00:47:55.473: INFO: Received response from host: affinity-nodeport-timeout-ln4v2 May 16 00:47:55.473: INFO: Received response from host: affinity-nodeport-timeout-ln4v2 May 16 00:47:55.473: INFO: Received response from host: affinity-nodeport-timeout-ln4v2 May 16 00:47:55.473: INFO: Received response from host: affinity-nodeport-timeout-ln4v2 May 16 00:47:55.473: INFO: Received response from host: affinity-nodeport-timeout-ln4v2 May 16 00:47:55.473: INFO: Received response from host: affinity-nodeport-timeout-ln4v2 May 16 00:47:55.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8433 execpod-affinityhx7kb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30777/' May 16 00:47:55.688: INFO: stderr: "I0516 00:47:55.612315 3150 log.go:172] (0xc00003a420) (0xc00052edc0) Create stream\nI0516 00:47:55.612392 3150 log.go:172] (0xc00003a420) (0xc00052edc0) Stream added, broadcasting: 1\nI0516 00:47:55.615289 3150 log.go:172] (0xc00003a420) Reply frame received for 1\nI0516 00:47:55.615339 3150 log.go:172] (0xc00003a420) (0xc000528500) Create stream\nI0516 00:47:55.615350 3150 log.go:172] (0xc00003a420) (0xc000528500) Stream added, broadcasting: 3\nI0516 00:47:55.616405 3150 log.go:172] (0xc00003a420) Reply frame received for 3\nI0516 00:47:55.616447 3150 log.go:172] (0xc00003a420) (0xc0004400a0) Create stream\nI0516 00:47:55.616463 3150 log.go:172] (0xc00003a420) (0xc0004400a0) Stream added, broadcasting: 5\nI0516 00:47:55.617686 3150 log.go:172] (0xc00003a420) Reply frame received for 5\nI0516 00:47:55.677800 3150 log.go:172] (0xc00003a420) Data frame received for 5\nI0516 00:47:55.677839 3150 log.go:172] (0xc0004400a0) (5) Data frame handling\nI0516 00:47:55.677862 3150 log.go:172] (0xc0004400a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:47:55.681887 3150 log.go:172] (0xc00003a420) Data frame received for 3\nI0516 00:47:55.681916 3150 log.go:172] (0xc000528500) (3) Data frame handling\nI0516 00:47:55.681941 3150 log.go:172] (0xc000528500) (3) Data frame sent\nI0516 00:47:55.682056 3150 log.go:172] (0xc00003a420) Data frame received for 3\nI0516 00:47:55.682093 3150 log.go:172] (0xc000528500) (3) Data frame handling\nI0516 00:47:55.682122 3150 log.go:172] (0xc00003a420) Data frame received for 5\nI0516 00:47:55.682149 3150 log.go:172] (0xc0004400a0) (5) Data frame handling\nI0516 00:47:55.684154 3150 log.go:172] (0xc00003a420) Data frame received for 1\nI0516 00:47:55.684207 3150 log.go:172] (0xc00052edc0) (1) Data frame handling\nI0516 00:47:55.684230 3150 log.go:172] (0xc00052edc0) (1) Data frame sent\nI0516 00:47:55.684247 3150 log.go:172] (0xc00003a420) (0xc00052edc0) Stream removed, broadcasting: 1\nI0516 00:47:55.684277 3150 log.go:172] (0xc00003a420) Go away received\nI0516 00:47:55.684525 3150 log.go:172] (0xc00003a420) 
(0xc00052edc0) Stream removed, broadcasting: 1\nI0516 00:47:55.684555 3150 log.go:172] (0xc00003a420) (0xc000528500) Stream removed, broadcasting: 3\nI0516 00:47:55.684567 3150 log.go:172] (0xc00003a420) (0xc0004400a0) Stream removed, broadcasting: 5\n" May 16 00:47:55.688: INFO: stdout: "affinity-nodeport-timeout-ln4v2" May 16 00:48:10.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8433 execpod-affinityhx7kb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30777/' May 16 00:48:10.909: INFO: stderr: "I0516 00:48:10.824210 3169 log.go:172] (0xc00099c000) (0xc0004252c0) Create stream\nI0516 00:48:10.824302 3169 log.go:172] (0xc00099c000) (0xc0004252c0) Stream added, broadcasting: 1\nI0516 00:48:10.827150 3169 log.go:172] (0xc00099c000) Reply frame received for 1\nI0516 00:48:10.827184 3169 log.go:172] (0xc00099c000) (0xc0004d0be0) Create stream\nI0516 00:48:10.827193 3169 log.go:172] (0xc00099c000) (0xc0004d0be0) Stream added, broadcasting: 3\nI0516 00:48:10.828092 3169 log.go:172] (0xc00099c000) Reply frame received for 3\nI0516 00:48:10.828129 3169 log.go:172] (0xc00099c000) (0xc0004259a0) Create stream\nI0516 00:48:10.828142 3169 log.go:172] (0xc00099c000) (0xc0004259a0) Stream added, broadcasting: 5\nI0516 00:48:10.829015 3169 log.go:172] (0xc00099c000) Reply frame received for 5\nI0516 00:48:10.898064 3169 log.go:172] (0xc00099c000) Data frame received for 5\nI0516 00:48:10.898093 3169 log.go:172] (0xc0004259a0) (5) Data frame handling\nI0516 00:48:10.898110 3169 log.go:172] (0xc0004259a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:48:10.901027 3169 log.go:172] (0xc00099c000) Data frame received for 3\nI0516 00:48:10.901054 3169 log.go:172] (0xc0004d0be0) (3) Data frame handling\nI0516 00:48:10.901083 3169 log.go:172] (0xc0004d0be0) (3) Data frame sent\nI0516 00:48:10.901959 3169 log.go:172] (0xc00099c000) Data frame received for 5\nI0516 00:48:10.901987 3169 log.go:172] (0xc0004259a0) (5) Data frame handling\nI0516 00:48:10.902015 3169 log.go:172] (0xc00099c000) Data frame received for 3\nI0516 00:48:10.902030 3169 log.go:172] (0xc0004d0be0) (3) Data frame handling\nI0516 00:48:10.903372 3169 log.go:172] (0xc00099c000) Data frame received for 1\nI0516 00:48:10.903445 3169 log.go:172] (0xc0004252c0) (1) Data frame handling\nI0516 00:48:10.903461 3169 log.go:172] (0xc0004252c0) (1) Data frame sent\nI0516 00:48:10.903474 3169 log.go:172] (0xc00099c000) (0xc0004252c0) Stream removed, broadcasting: 1\nI0516 00:48:10.903490 3169 log.go:172] (0xc00099c000) Go away received\nI0516 00:48:10.904216 3169 log.go:172] (0xc00099c000) (0xc0004252c0) Stream removed, broadcasting: 1\nI0516 00:48:10.904264 3169 log.go:172] (0xc00099c000) (0xc0004d0be0) Stream removed, broadcasting: 3\nI0516 00:48:10.904284 3169 log.go:172] (0xc00099c000) (0xc0004259a0) Stream removed, broadcasting: 5\n" May 16 00:48:10.909: INFO: stdout: "affinity-nodeport-timeout-ln4v2" May 16 00:48:25.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8433 execpod-affinityhx7kb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30777/' May 16 00:48:26.140: INFO: stderr: "I0516 00:48:26.039212 3189 log.go:172] (0xc000a76fd0) (0xc000a24280) Create stream\nI0516 00:48:26.039264 3189 log.go:172] (0xc000a76fd0) (0xc000a24280) Stream added, broadcasting: 1\nI0516 00:48:26.043798 3189 
log.go:172] (0xc000a76fd0) Reply frame received for 1\nI0516 00:48:26.043845 3189 log.go:172] (0xc000a76fd0) (0xc0005401e0) Create stream\nI0516 00:48:26.043861 3189 log.go:172] (0xc000a76fd0) (0xc0005401e0) Stream added, broadcasting: 3\nI0516 00:48:26.044504 3189 log.go:172] (0xc000a76fd0) Reply frame received for 3\nI0516 00:48:26.044537 3189 log.go:172] (0xc000a76fd0) (0xc0004d41e0) Create stream\nI0516 00:48:26.044547 3189 log.go:172] (0xc000a76fd0) (0xc0004d41e0) Stream added, broadcasting: 5\nI0516 00:48:26.045540 3189 log.go:172] (0xc000a76fd0) Reply frame received for 5\nI0516 00:48:26.128813 3189 log.go:172] (0xc000a76fd0) Data frame received for 5\nI0516 00:48:26.128841 3189 log.go:172] (0xc0004d41e0) (5) Data frame handling\nI0516 00:48:26.128859 3189 log.go:172] (0xc0004d41e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30777/\nI0516 00:48:26.136343 3189 log.go:172] (0xc000a76fd0) Data frame received for 3\nI0516 00:48:26.136364 3189 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0516 00:48:26.136379 3189 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0516 00:48:26.137416 3189 log.go:172] (0xc000a76fd0) Data frame received for 3\nI0516 00:48:26.137431 3189 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0516 00:48:26.137513 3189 log.go:172] (0xc000a76fd0) Data frame received for 5\nI0516 00:48:26.137531 3189 log.go:172] (0xc0004d41e0) (5) Data frame handling\nI0516 00:48:26.138586 3189 log.go:172] (0xc000a76fd0) Data frame received for 1\nI0516 00:48:26.138596 3189 log.go:172] (0xc000a24280) (1) Data frame handling\nI0516 00:48:26.138607 3189 log.go:172] (0xc000a24280) (1) Data frame sent\nI0516 00:48:26.138663 3189 log.go:172] (0xc000a76fd0) (0xc000a24280) Stream removed, broadcasting: 1\nI0516 00:48:26.138726 3189 log.go:172] (0xc000a76fd0) Go away received\nI0516 00:48:26.138845 3189 log.go:172] (0xc000a76fd0) (0xc000a24280) Stream removed, broadcasting: 1\nI0516 00:48:26.138854 3189 log.go:172] (0xc000a76fd0) (0xc0005401e0) Stream removed, broadcasting: 3\nI0516 00:48:26.138860 3189 log.go:172] (0xc000a76fd0) (0xc0004d41e0) Stream removed, broadcasting: 5\n" May 16 00:48:26.141: INFO: stdout: "affinity-nodeport-timeout-lxkf2" May 16 00:48:26.141: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-8433, will wait for the garbage collector to delete the pods May 16 00:48:26.264: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 5.631009ms May 16 00:48:26.764: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 500.247947ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:48:35.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8433" for this suite. 
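The run above shows the affinity behavior under test: sixteen back-to-back requests through the NodePort all land on affinity-nodeport-timeout-ln4v2, and only after idle pauses does a request reach a different backend (affinity-nodeport-timeout-lxkf2), i.e. the ClientIP affinity entry expired. A minimal sketch of a Service configured this way, using the k8s.io/api types — the 10-second timeout is illustrative, since the test's exact value is not printed in this log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	timeout := int32(10) // illustrative; expires the per-client affinity entry after idle time
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "affinity-nodeport-timeout"},
			Ports:    []corev1.ServicePort{{Port: 80}},
			// Pin each client IP to a single backend pod until the timeout elapses
			// with no traffic; afterwards a new backend may be chosen, as seen above.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}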
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:65.033 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":215,"skipped":3496,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:48:35.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-0bf838e6-2cf1-48f2-969a-305070a5b9a0 STEP: Creating a pod to test consume secrets May 16 00:48:35.483: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-495d2f32-854d-4aaf-9cf2-bb50519bdaeb" in namespace "projected-490" to be "Succeeded or Failed" May 16 00:48:35.538: INFO: Pod "pod-projected-secrets-495d2f32-854d-4aaf-9cf2-bb50519bdaeb": Phase="Pending", Reason="", readiness=false. Elapsed: 55.084782ms May 16 00:48:37.543: INFO: Pod "pod-projected-secrets-495d2f32-854d-4aaf-9cf2-bb50519bdaeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060554994s May 16 00:48:39.547: INFO: Pod "pod-projected-secrets-495d2f32-854d-4aaf-9cf2-bb50519bdaeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06420511s STEP: Saw pod success May 16 00:48:39.547: INFO: Pod "pod-projected-secrets-495d2f32-854d-4aaf-9cf2-bb50519bdaeb" satisfied condition "Succeeded or Failed" May 16 00:48:39.549: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-495d2f32-854d-4aaf-9cf2-bb50519bdaeb container projected-secret-volume-test: STEP: delete the pod May 16 00:48:39.580: INFO: Waiting for pod pod-projected-secrets-495d2f32-854d-4aaf-9cf2-bb50519bdaeb to disappear May 16 00:48:39.594: INFO: Pod pod-projected-secrets-495d2f32-854d-4aaf-9cf2-bb50519bdaeb no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:48:39.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-490" for this suite. 
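The projected-secret test above mounts a secret through a projected volume with defaultMode set, then verifies the file permissions and contents from inside the container. A minimal sketch of the shape of such a pod spec, assuming a secret named projected-secret-test and a mount path of /etc/projected (both illustrative, not taken from the test source):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // owner read-only; applied to every file the volume projects
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}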
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":216,"skipped":3497,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:48:39.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 16 00:48:44.213: INFO: Successfully updated pod "pod-update-b8fb4e44-ecba-4f5f-9082-7e7580c4b5e6" STEP: verifying the updated pod is in kubernetes May 16 00:48:44.235: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:48:44.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1522" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":217,"skipped":3517,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:48:44.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-aa14b89f-bc73-451c-8758-4f1ca336e830 STEP: Creating a pod to test consume secrets May 16 00:48:44.350: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-91045054-b135-441d-ab28-ecbc0ae1c7e2" in namespace "projected-4717" to be "Succeeded or Failed" May 16 00:48:44.367: INFO: Pod "pod-projected-secrets-91045054-b135-441d-ab28-ecbc0ae1c7e2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.02168ms May 16 00:48:46.370: INFO: Pod "pod-projected-secrets-91045054-b135-441d-ab28-ecbc0ae1c7e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020271678s May 16 00:48:48.374: INFO: Pod "pod-projected-secrets-91045054-b135-441d-ab28-ecbc0ae1c7e2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023474236s STEP: Saw pod success May 16 00:48:48.374: INFO: Pod "pod-projected-secrets-91045054-b135-441d-ab28-ecbc0ae1c7e2" satisfied condition "Succeeded or Failed" May 16 00:48:48.376: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-91045054-b135-441d-ab28-ecbc0ae1c7e2 container projected-secret-volume-test: STEP: delete the pod May 16 00:48:48.407: INFO: Waiting for pod pod-projected-secrets-91045054-b135-441d-ab28-ecbc0ae1c7e2 to disappear May 16 00:48:48.442: INFO: Pod pod-projected-secrets-91045054-b135-441d-ab28-ecbc0ae1c7e2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:48:48.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4717" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":218,"skipped":3519,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:48:48.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:48:48.571: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:48:54.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7352" for this suite. 
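The websocket test above exercises remote command execution against the pod's exec subresource. The conformance test drives this over a websocket; a common client-go equivalent uses the SPDY executor against the same endpoint, sketched below. The namespace and pod name are placeholders, and the kubeconfig path is the one the suite itself uses:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build a POST request against the pod's "exec" subresource, the same
	// endpoint the websocket-based conformance test connects to.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("default").Name("my-pod"). // placeholders
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"echo", "remote execution"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}

	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String()) // the command's output, streamed back over the connection
}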
• [SLOW TEST:6.302 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":219,"skipped":3540,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:48:54.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 00:48:54.827: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 16 00:48:54.880: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:48:54.882: INFO: Number of nodes with available pods: 0 May 16 00:48:54.882: INFO: Node latest-worker is running more than one daemon pod May 16 00:48:55.887: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:48:55.891: INFO: Number of nodes with available pods: 0 May 16 00:48:55.891: INFO: Node latest-worker is running more than one daemon pod May 16 00:48:56.920: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:48:56.926: INFO: Number of nodes with available pods: 0 May 16 00:48:56.926: INFO: Node latest-worker is running more than one daemon pod May 16 00:48:57.887: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:48:57.891: INFO: Number of nodes with available pods: 0 May 16 00:48:57.891: INFO: Node latest-worker is running more than one daemon pod May 16 00:48:58.887: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:48:58.916: INFO: Number of nodes with available pods: 1 May 16 00:48:58.916: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:48:59.914: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this 
node May 16 00:48:59.934: INFO: Number of nodes with available pods: 2 May 16 00:48:59.934: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 16 00:49:00.066: INFO: Wrong image for pod: daemon-set-k2m8c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:00.066: INFO: Wrong image for pod: daemon-set-pwsfx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:00.070: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:01.075: INFO: Wrong image for pod: daemon-set-k2m8c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:01.075: INFO: Wrong image for pod: daemon-set-pwsfx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:01.079: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:02.075: INFO: Wrong image for pod: daemon-set-k2m8c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:02.075: INFO: Pod daemon-set-k2m8c is not available May 16 00:49:02.075: INFO: Wrong image for pod: daemon-set-pwsfx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:02.079: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:03.075: INFO: Pod daemon-set-flx68 is not available May 16 00:49:03.075: INFO: Wrong image for pod: daemon-set-pwsfx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:03.082: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:04.075: INFO: Pod daemon-set-flx68 is not available May 16 00:49:04.075: INFO: Wrong image for pod: daemon-set-pwsfx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:04.079: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:05.180: INFO: Pod daemon-set-flx68 is not available May 16 00:49:05.180: INFO: Wrong image for pod: daemon-set-pwsfx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:05.185: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:06.131: INFO: Wrong image for pod: daemon-set-pwsfx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 16 00:49:06.135: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:07.125: INFO: Wrong image for pod: daemon-set-pwsfx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:07.128: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:08.083: INFO: Wrong image for pod: daemon-set-pwsfx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:08.083: INFO: Pod daemon-set-pwsfx is not available May 16 00:49:08.087: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:09.095: INFO: Wrong image for pod: daemon-set-pwsfx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:09.095: INFO: Pod daemon-set-pwsfx is not available May 16 00:49:09.098: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:10.075: INFO: Wrong image for pod: daemon-set-pwsfx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:10.075: INFO: Pod daemon-set-pwsfx is not available May 16 00:49:10.080: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:11.075: INFO: Wrong image for pod: daemon-set-pwsfx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:11.075: INFO: Pod daemon-set-pwsfx is not available May 16 00:49:11.079: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:12.075: INFO: Wrong image for pod: daemon-set-pwsfx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:12.075: INFO: Pod daemon-set-pwsfx is not available May 16 00:49:12.080: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:13.075: INFO: Wrong image for pod: daemon-set-pwsfx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:13.075: INFO: Pod daemon-set-pwsfx is not available May 16 00:49:13.078: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:14.075: INFO: Wrong image for pod: daemon-set-pwsfx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 16 00:49:14.075: INFO: Pod daemon-set-pwsfx is not available May 16 00:49:14.080: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:15.075: INFO: Wrong image for pod: daemon-set-pwsfx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 16 00:49:15.075: INFO: Pod daemon-set-pwsfx is not available May 16 00:49:15.079: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:16.075: INFO: Pod daemon-set-vrps5 is not available May 16 00:49:16.079: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 16 00:49:16.083: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:16.085: INFO: Number of nodes with available pods: 1 May 16 00:49:16.085: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:49:17.090: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:17.094: INFO: Number of nodes with available pods: 1 May 16 00:49:17.094: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:49:18.174: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:18.177: INFO: Number of nodes with available pods: 1 May 16 00:49:18.177: INFO: Node latest-worker2 is running more than one daemon pod May 16 00:49:19.090: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 00:49:19.093: INFO: Number of nodes with available pods: 2 May 16 00:49:19.093: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1635, will wait for the garbage collector to delete the pods May 16 00:49:19.164: INFO: Deleting DaemonSet.extensions daemon-set took: 6.204322ms May 16 00:49:19.464: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.214595ms May 16 00:49:35.272: INFO: Number of nodes with available pods: 0 May 16 00:49:35.272: INFO: Number of running nodes: 0, number of available pods: 0 May 16 00:49:35.274: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1635/daemonsets","resourceVersion":"5021845"},"items":null} May 16 00:49:35.276: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1635/pods","resourceVersion":"5021845"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:49:35.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1635" for this suite. • [SLOW TEST:40.540 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":220,"skipped":3567,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:49:35.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 16 00:49:35.335: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:49:41.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7613" for this suite. 
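------------------------------
The init-container case above exercises the rule that on a pod with restartPolicy Never, a failing init container fails the whole pod and the app containers never start. A minimal sketch of such a pod in Go, using the v1.18-era core/v1 types (the pod name, images, and commands are illustrative, not the ones the test generated):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"},
		Spec: corev1.PodSpec{
			// With RestartPolicy Never, a failed init container fails the
			// whole pod; the app container below is never started.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init-fails",
				Image:   "busybox:1.29",
				Command: []string{"/bin/false"}, // exit code 1 -> init failure
			}},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox:1.29",
				Command: []string{"/bin/true"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

After creation, such a pod is expected to end in phase Failed with the app container still in a waiting state.
------------------------------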
• [SLOW TEST:5.788 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":221,"skipped":3598,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:49:41.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-c8b4e185-ca05-4b75-9c05-3df3023c12d4 in namespace container-probe-3528 May 16 00:49:45.226: INFO: Started pod busybox-c8b4e185-ca05-4b75-9c05-3df3023c12d4 in namespace container-probe-3528 STEP: checking the pod's current state and verifying that restartCount is present May 16 00:49:45.227: INFO: Initial restart count of pod busybox-c8b4e185-ca05-4b75-9c05-3df3023c12d4 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:53:45.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3528" for this suite. 
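------------------------------
The probe test above watches restartCount stay at 0 for four minutes while an exec probe ("cat /tmp/health") keeps succeeding. A sketch of that wiring, assuming the v1.18-era types in which Probe still embeds Handler (newer client-go renamed it ProbeHandler); the image, command, and timings here are assumptions, not the test's exact values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-ok-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.29",
				// Create the probed file first, then sleep; the probe below
				// keeps succeeding, so the kubelet never restarts the
				// container and restartCount stays 0.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------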
• [SLOW TEST:244.757 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":222,"skipped":3627,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:53:45.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 16 00:53:45.934: INFO: Waiting up to 5m0s for pod "downwardapi-volume-54ae471d-4785-49a0-827d-02dc3ba5ed01" in namespace "projected-8967" to be "Succeeded or Failed" May 16 00:53:45.938: INFO: Pod "downwardapi-volume-54ae471d-4785-49a0-827d-02dc3ba5ed01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.385451ms May 16 00:53:47.941: INFO: Pod "downwardapi-volume-54ae471d-4785-49a0-827d-02dc3ba5ed01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007302874s May 16 00:53:49.945: INFO: Pod "downwardapi-volume-54ae471d-4785-49a0-827d-02dc3ba5ed01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010916722s May 16 00:53:51.949: INFO: Pod "downwardapi-volume-54ae471d-4785-49a0-827d-02dc3ba5ed01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015373291s STEP: Saw pod success May 16 00:53:51.949: INFO: Pod "downwardapi-volume-54ae471d-4785-49a0-827d-02dc3ba5ed01" satisfied condition "Succeeded or Failed" May 16 00:53:51.953: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-54ae471d-4785-49a0-827d-02dc3ba5ed01 container client-container: STEP: delete the pod May 16 00:53:52.026: INFO: Waiting for pod downwardapi-volume-54ae471d-4785-49a0-827d-02dc3ba5ed01 to disappear May 16 00:53:52.034: INFO: Pod downwardapi-volume-54ae471d-4785-49a0-827d-02dc3ba5ed01 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:53:52.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8967" for this suite. 
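------------------------------
In the projected downwardAPI case above, DefaultMode controls the permission bits applied to every file the volume projects. A sketch of that volume shape (the 0400 mode, paths, and names are illustrative assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // permissions applied to every projected file
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox:1.29",
				Command:      []string{"/bin/sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------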
• [SLOW TEST:6.202 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":223,"skipped":3636,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:53:52.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 16 00:53:52.154: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:54:04.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2651" for this suite. 
• [SLOW TEST:12.812 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":224,"skipped":3661,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:54:04.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 00:54:05.639: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 00:54:07.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187245, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187245, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187245, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187245, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 00:54:09.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187245, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187245, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187245, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187245, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: 
Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 00:54:12.871: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 16 00:54:16.947: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-661 to-be-attached-pod -i -c=container1' May 16 00:54:17.066: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:54:17.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-661" for this suite. STEP: Destroying namespace "webhook-661-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.316 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":225,"skipped":3669,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:54:17.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-34af3827-ec84-4fa4-9c48-d6f548a50e28 STEP: Creating a pod to test consume configMaps May 16 00:54:17.267: INFO: Waiting up to 5m0s for pod "pod-configmaps-19e472f2-fafe-4d5a-a81b-2d3cbc977e74" in namespace "configmap-9101" to be "Succeeded or Failed" May 16 00:54:17.300: INFO: Pod "pod-configmaps-19e472f2-fafe-4d5a-a81b-2d3cbc977e74": Phase="Pending", Reason="", readiness=false. Elapsed: 32.62602ms May 16 00:54:19.303: INFO: Pod "pod-configmaps-19e472f2-fafe-4d5a-a81b-2d3cbc977e74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03635889s May 16 00:54:21.308: INFO: Pod "pod-configmaps-19e472f2-fafe-4d5a-a81b-2d3cbc977e74": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040910263s STEP: Saw pod success May 16 00:54:21.308: INFO: Pod "pod-configmaps-19e472f2-fafe-4d5a-a81b-2d3cbc977e74" satisfied condition "Succeeded or Failed" May 16 00:54:21.311: INFO: Trying to get logs from node latest-worker pod pod-configmaps-19e472f2-fafe-4d5a-a81b-2d3cbc977e74 container configmap-volume-test: STEP: delete the pod May 16 00:54:21.330: INFO: Waiting for pod pod-configmaps-19e472f2-fafe-4d5a-a81b-2d3cbc977e74 to disappear May 16 00:54:21.349: INFO: Pod pod-configmaps-19e472f2-fafe-4d5a-a81b-2d3cbc977e74 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:54:21.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9101" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":226,"skipped":3673,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:54:21.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-6089cf99-c015-4c99-86a3-26486fc3deef STEP: Creating a pod to test consume configMaps May 16 00:54:21.467: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f7536bf2-9137-42d4-b34d-5e8690782ca8" in namespace "projected-2258" to be "Succeeded or Failed" May 16 00:54:21.472: INFO: Pod "pod-projected-configmaps-f7536bf2-9137-42d4-b34d-5e8690782ca8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.390782ms May 16 00:54:23.487: INFO: Pod "pod-projected-configmaps-f7536bf2-9137-42d4-b34d-5e8690782ca8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019154387s May 16 00:54:25.576: INFO: Pod "pod-projected-configmaps-f7536bf2-9137-42d4-b34d-5e8690782ca8": Phase="Running", Reason="", readiness=true. Elapsed: 4.108805587s May 16 00:54:27.579: INFO: Pod "pod-projected-configmaps-f7536bf2-9137-42d4-b34d-5e8690782ca8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.111860007s STEP: Saw pod success May 16 00:54:27.579: INFO: Pod "pod-projected-configmaps-f7536bf2-9137-42d4-b34d-5e8690782ca8" satisfied condition "Succeeded or Failed" May 16 00:54:27.581: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-f7536bf2-9137-42d4-b34d-5e8690782ca8 container projected-configmap-volume-test: STEP: delete the pod May 16 00:54:27.638: INFO: Waiting for pod pod-projected-configmaps-f7536bf2-9137-42d4-b34d-5e8690782ca8 to disappear May 16 00:54:27.655: INFO: Pod pod-projected-configmaps-f7536bf2-9137-42d4-b34d-5e8690782ca8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:54:27.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2258" for this suite. • [SLOW TEST:6.302 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":227,"skipped":3676,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:54:27.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-d0c35d39-ed3c-4e82-862e-f1dd02de3f01 STEP: Creating a pod to test consume configMaps May 16 00:54:27.776: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3680c08b-e66f-4729-a446-b596f7a78b05" in namespace "projected-494" to be "Succeeded or Failed" May 16 00:54:27.780: INFO: Pod "pod-projected-configmaps-3680c08b-e66f-4729-a446-b596f7a78b05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.791061ms May 16 00:54:29.784: INFO: Pod "pod-projected-configmaps-3680c08b-e66f-4729-a446-b596f7a78b05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008345771s May 16 00:54:31.790: INFO: Pod "pod-projected-configmaps-3680c08b-e66f-4729-a446-b596f7a78b05": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014096204s STEP: Saw pod success May 16 00:54:31.790: INFO: Pod "pod-projected-configmaps-3680c08b-e66f-4729-a446-b596f7a78b05" satisfied condition "Succeeded or Failed" May 16 00:54:31.792: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-3680c08b-e66f-4729-a446-b596f7a78b05 container projected-configmap-volume-test: STEP: delete the pod May 16 00:54:31.838: INFO: Waiting for pod pod-projected-configmaps-3680c08b-e66f-4729-a446-b596f7a78b05 to disappear May 16 00:54:32.061: INFO: Pod pod-projected-configmaps-3680c08b-e66f-4729-a446-b596f7a78b05 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:54:32.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-494" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":228,"skipped":3682,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:54:32.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 16 00:54:32.131: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 16 00:54:32.152: INFO: Waiting for terminating namespaces to be deleted... 
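------------------------------
The projected configMap cases above mount individual keys at remapped paths and read them back as a non-root user. A sketch of that volume and security-context shape (the UID, mode, key names, and paths are illustrative assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)  // run the consuming container as a non-root user
	mode := int32(0444) // per-file mode for the remapped key
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			RestartPolicy:   corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox:1.29",
				Command:      []string{"cat", "/etc/cm/path/to/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"},
								// Remap key "data-1" to a nested path inside the mount.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1", Mode: &mode}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------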
May 16 00:54:32.155: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 16 00:54:32.160: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 16 00:54:32.160: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 16 00:54:32.160: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 16 00:54:32.160: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 16 00:54:32.160: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 16 00:54:32.160: INFO: Container kindnet-cni ready: true, restart count 0 May 16 00:54:32.160: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 16 00:54:32.160: INFO: Container kube-proxy ready: true, restart count 0 May 16 00:54:32.160: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 16 00:54:32.202: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 16 00:54:32.202: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 16 00:54:32.202: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 16 00:54:32.202: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 16 00:54:32.202: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 16 00:54:32.202: INFO: Container kindnet-cni ready: true, restart count 0 May 16 00:54:32.202: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 16 00:54:32.202: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-3bf73ef7-e8ba-41c0-bf90-af6192e4f68e 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-3bf73ef7-e8ba-41c0-bf90-af6192e4f68e off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-3bf73ef7-e8ba-41c0-bf90-af6192e4f68e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:59:42.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2911" for this suite. 
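------------------------------
The predicate validated above: two pods requesting the same hostPort and protocol collide when one binds 0.0.0.0, even though the other names a specific hostIP. A sketch of the two conflicting specs; the port, hostIPs, and image mirror values seen in the log, while the pod names, node selector, and container port are illustrative assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod exposing the given hostPort/hostIP; an empty
// hostIP means 0.0.0.0, i.e. every host address.
func hostPortPod(name, hostIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			// Steer both pods to the same node via a well-known label, so the
			// scheduler (not direct binding) has to detect the port conflict.
			NodeSelector: map[string]string{"kubernetes.io/hostname": "latest-worker2"},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54322,
					HostIP:        hostIP,
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	// pod4 binds 0.0.0.0:54322; pod5 asks for 127.0.0.1:54322 with the same
	// protocol, which overlaps, so pod5 must stay Pending on that node.
	for _, p := range []*corev1.Pod{hostPortPod("pod4", ""), hostPortPod("pod5", "127.0.0.1")} {
		out, _ := json.MarshalIndent(p, "", "  ")
		fmt.Println(string(out))
	}
}
------------------------------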
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:310.332 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":229,"skipped":3683,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:59:42.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 16 00:59:42.488: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7aefdf0-2332-4a65-8354-4607a3280f8d" in namespace "downward-api-6871" to be "Succeeded or Failed" May 16 00:59:42.536: INFO: Pod "downwardapi-volume-a7aefdf0-2332-4a65-8354-4607a3280f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 48.239533ms May 16 00:59:44.693: INFO: Pod "downwardapi-volume-a7aefdf0-2332-4a65-8354-4607a3280f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204703549s May 16 00:59:46.697: INFO: Pod "downwardapi-volume-a7aefdf0-2332-4a65-8354-4607a3280f8d": Phase="Running", Reason="", readiness=true. Elapsed: 4.208902777s May 16 00:59:48.701: INFO: Pod "downwardapi-volume-a7aefdf0-2332-4a65-8354-4607a3280f8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.212802339s STEP: Saw pod success May 16 00:59:48.701: INFO: Pod "downwardapi-volume-a7aefdf0-2332-4a65-8354-4607a3280f8d" satisfied condition "Succeeded or Failed" May 16 00:59:48.704: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a7aefdf0-2332-4a65-8354-4607a3280f8d container client-container: STEP: delete the pod May 16 00:59:48.740: INFO: Waiting for pod downwardapi-volume-a7aefdf0-2332-4a65-8354-4607a3280f8d to disappear May 16 00:59:48.750: INFO: Pod downwardapi-volume-a7aefdf0-2332-4a65-8354-4607a3280f8d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:59:48.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6871" for this suite. 
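------------------------------
The downward API case above ("podname only") exposes a single file carrying the pod's own name. Unlike the projected variant sketched earlier, this uses the plain downwardAPI volume source; names and paths below are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-podname-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox:1.29",
				Command:      []string{"cat", "/etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					// Plain downwardAPI volume (no projection): one file whose
					// content is resolved from the pod's own metadata.name.
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------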
• [SLOW TEST:6.420 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":230,"skipped":3719,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:59:48.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 00:59:49.475: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 00:59:51.645: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187589, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187589, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187589, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187589, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 00:59:54.674: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations 
and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 00:59:54.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6628" for this suite. STEP: Destroying namespace "webhook-6628-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.984 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":231,"skipped":3740,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 00:59:54.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 01:00:01.010: INFO: Waiting up to 5m0s for pod "client-envvars-c726b2aa-3a71-4e92-b445-d3c95b9bdd70" in namespace "pods-2517" to be "Succeeded or Failed" May 16 01:00:01.064: INFO: Pod "client-envvars-c726b2aa-3a71-4e92-b445-d3c95b9bdd70": Phase="Pending", Reason="", readiness=false. Elapsed: 54.124252ms May 16 01:00:03.118: INFO: Pod "client-envvars-c726b2aa-3a71-4e92-b445-d3c95b9bdd70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108298151s May 16 01:00:05.122: INFO: Pod "client-envvars-c726b2aa-3a71-4e92-b445-d3c95b9bdd70": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.112503678s STEP: Saw pod success May 16 01:00:05.122: INFO: Pod "client-envvars-c726b2aa-3a71-4e92-b445-d3c95b9bdd70" satisfied condition "Succeeded or Failed" May 16 01:00:05.125: INFO: Trying to get logs from node latest-worker pod client-envvars-c726b2aa-3a71-4e92-b445-d3c95b9bdd70 container env3cont: STEP: delete the pod May 16 01:00:05.455: INFO: Waiting for pod client-envvars-c726b2aa-3a71-4e92-b445-d3c95b9bdd70 to disappear May 16 01:00:05.470: INFO: Pod client-envvars-c726b2aa-3a71-4e92-b445-d3c95b9bdd70 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:00:05.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2517" for this suite. • [SLOW TEST:10.671 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":232,"skipped":3768,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:00:05.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-8658 STEP: creating a selector STEP: Creating the service pods in kubernetes May 16 01:00:05.553: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 16 01:00:05.653: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 16 01:00:07.777: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 16 01:00:09.657: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 16 01:00:11.741: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:00:13.658: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:00:15.658: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:00:17.658: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:00:19.657: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:00:21.657: INFO: The status of Pod netserver-0 is Running (Ready = true) May 16 01:00:21.662: INFO: The status of Pod netserver-1 is Running (Ready = false) May 16 01:00:23.666: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 16 01:00:27.694: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.4:8080/dial?request=hostname&protocol=udp&host=10.244.1.202&port=8081&tries=1'] Namespace:pod-network-test-8658 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 01:00:27.694: INFO: >>> kubeConfig: /root/.kube/config I0516 01:00:27.727397 7 log.go:172] (0xc00256e370) (0xc001e97d60) Create stream I0516 01:00:27.727429 7 log.go:172] (0xc00256e370) (0xc001e97d60) Stream added, broadcasting: 1 I0516 01:00:27.729394 7 log.go:172] (0xc00256e370) Reply frame received for 1 I0516 01:00:27.729424 7 log.go:172] (0xc00256e370) (0xc00151fd60) Create stream I0516 01:00:27.729433 7 log.go:172] (0xc00256e370) (0xc00151fd60) Stream added, broadcasting: 3 I0516 01:00:27.730309 7 log.go:172] (0xc00256e370) Reply frame received for 3 I0516 01:00:27.730353 7 log.go:172] (0xc00256e370) (0xc001e97e00) Create stream I0516 01:00:27.730364 7 log.go:172] (0xc00256e370) (0xc001e97e00) Stream added, broadcasting: 5 I0516 01:00:27.731124 7 log.go:172] (0xc00256e370) Reply frame received for 5 I0516 01:00:27.803819 7 log.go:172] (0xc00256e370) Data frame received for 3 I0516 01:00:27.803867 7 log.go:172] (0xc00151fd60) (3) Data frame handling I0516 01:00:27.803906 7 log.go:172] (0xc00151fd60) (3) Data frame sent I0516 01:00:27.804668 7 log.go:172] (0xc00256e370) Data frame received for 3 I0516 01:00:27.804724 7 log.go:172] (0xc00151fd60) (3) Data frame handling I0516 01:00:27.804758 7 log.go:172] (0xc00256e370) Data frame received for 5 I0516 01:00:27.804777 7 log.go:172] (0xc001e97e00) (5) Data frame handling I0516 01:00:27.806811 7 log.go:172] (0xc00256e370) Data frame received for 1 I0516 01:00:27.806893 7 log.go:172] (0xc001e97d60) (1) Data frame handling I0516 01:00:27.806914 7 log.go:172] (0xc001e97d60) (1) Data frame sent I0516 01:00:27.806939 7 log.go:172] (0xc00256e370) (0xc001e97d60) Stream removed, broadcasting: 1 I0516 01:00:27.806962 7 log.go:172] (0xc00256e370) Go away received I0516 01:00:27.807103 7 log.go:172] (0xc00256e370) (0xc001e97d60) Stream removed, broadcasting: 1 I0516 01:00:27.807145 7 log.go:172] (0xc00256e370) (0xc00151fd60) Stream removed, broadcasting: 3 I0516 01:00:27.807173 7 log.go:172] (0xc00256e370) (0xc001e97e00) Stream removed, broadcasting: 5 May 16 01:00:27.807: INFO: Waiting for responses: map[] May 16 01:00:27.811: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.4:8080/dial?request=hostname&protocol=udp&host=10.244.2.3&port=8081&tries=1'] Namespace:pod-network-test-8658 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 01:00:27.811: INFO: >>> kubeConfig: /root/.kube/config I0516 01:00:27.836342 7 log.go:172] (0xc0028506e0) (0xc001dae6e0) Create stream I0516 01:00:27.836367 7 log.go:172] (0xc0028506e0) (0xc001dae6e0) Stream added, broadcasting: 1 I0516 01:00:27.838946 7 log.go:172] (0xc0028506e0) Reply frame received for 1 I0516 01:00:27.838976 7 log.go:172] (0xc0028506e0) (0xc001e97f40) Create stream I0516 01:00:27.838993 7 log.go:172] (0xc0028506e0) (0xc001e97f40) Stream added, broadcasting: 3 I0516 01:00:27.840060 7 log.go:172] (0xc0028506e0) Reply frame received for 3 I0516 01:00:27.840121 7 log.go:172] (0xc0028506e0) (0xc001868000) Create stream I0516 01:00:27.840137 7 log.go:172] (0xc0028506e0) (0xc001868000) Stream added, broadcasting: 5 I0516 01:00:27.840999 7 log.go:172] (0xc0028506e0) Reply frame received for 5 I0516 01:00:27.916536 7 log.go:172] (0xc0028506e0) Data frame 
received for 3 I0516 01:00:27.916580 7 log.go:172] (0xc001e97f40) (3) Data frame handling I0516 01:00:27.916610 7 log.go:172] (0xc001e97f40) (3) Data frame sent I0516 01:00:27.916935 7 log.go:172] (0xc0028506e0) Data frame received for 5 I0516 01:00:27.916957 7 log.go:172] (0xc001868000) (5) Data frame handling I0516 01:00:27.916980 7 log.go:172] (0xc0028506e0) Data frame received for 3 I0516 01:00:27.917014 7 log.go:172] (0xc001e97f40) (3) Data frame handling I0516 01:00:27.918730 7 log.go:172] (0xc0028506e0) Data frame received for 1 I0516 01:00:27.918758 7 log.go:172] (0xc001dae6e0) (1) Data frame handling I0516 01:00:27.918795 7 log.go:172] (0xc001dae6e0) (1) Data frame sent I0516 01:00:27.918819 7 log.go:172] (0xc0028506e0) (0xc001dae6e0) Stream removed, broadcasting: 1 I0516 01:00:27.918845 7 log.go:172] (0xc0028506e0) Go away received I0516 01:00:27.918960 7 log.go:172] (0xc0028506e0) (0xc001dae6e0) Stream removed, broadcasting: 1 I0516 01:00:27.919004 7 log.go:172] (0xc0028506e0) (0xc001e97f40) Stream removed, broadcasting: 3 I0516 01:00:27.919036 7 log.go:172] (0xc0028506e0) (0xc001868000) Stream removed, broadcasting: 5 May 16 01:00:27.919: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:00:27.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8658" for this suite. • [SLOW TEST:22.445 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":233,"skipped":3773,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:00:27.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 16 01:00:28.011: INFO: Waiting up to 5m0s for pod "var-expansion-552b300b-52af-43ce-939d-bb6b786ced37" in namespace "var-expansion-7544" to be "Succeeded or Failed" May 16 01:00:28.016: INFO: Pod "var-expansion-552b300b-52af-43ce-939d-bb6b786ced37": Phase="Pending", Reason="", readiness=false. Elapsed: 4.639008ms May 16 01:00:30.022: INFO: Pod "var-expansion-552b300b-52af-43ce-939d-bb6b786ced37": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010723877s May 16 01:00:32.026: INFO: Pod "var-expansion-552b300b-52af-43ce-939d-bb6b786ced37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015046165s STEP: Saw pod success May 16 01:00:32.026: INFO: Pod "var-expansion-552b300b-52af-43ce-939d-bb6b786ced37" satisfied condition "Succeeded or Failed" May 16 01:00:32.029: INFO: Trying to get logs from node latest-worker pod var-expansion-552b300b-52af-43ce-939d-bb6b786ced37 container dapi-container: STEP: delete the pod May 16 01:00:32.070: INFO: Waiting for pod var-expansion-552b300b-52af-43ce-939d-bb6b786ced37 to disappear May 16 01:00:32.075: INFO: Pod var-expansion-552b300b-52af-43ce-939d-bb6b786ced37 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:00:32.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7544" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":234,"skipped":3775,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:00:32.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 16 01:00:32.159: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 16 01:00:32.186: INFO: Waiting for terminating namespaces to be deleted... 
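------------------------------
The var-expansion case earlier in this run substitutes $(VAR) references in a container's command from its Env entries; the expansion is performed by the kubelet before the process starts, with no shell involved. A sketch (the variable name, value, and image are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox:1.29",
				Env:   []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
				// $(TEST_VAR) is resolved by the kubelet from the Env list
				// above, so /bin/echo receives the literal "test-value".
				Command: []string{"/bin/echo", "$(TEST_VAR)"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------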
May 16 01:00:32.189: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 16 01:00:32.193: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 16 01:00:32.193: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 16 01:00:32.193: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 16 01:00:32.193: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 16 01:00:32.193: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 16 01:00:32.194: INFO: Container kindnet-cni ready: true, restart count 0 May 16 01:00:32.194: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 16 01:00:32.194: INFO: Container kube-proxy ready: true, restart count 0 May 16 01:00:32.194: INFO: netserver-0 from pod-network-test-8658 started at 2020-05-16 01:00:05 +0000 UTC (1 container statuses recorded) May 16 01:00:32.194: INFO: Container webserver ready: true, restart count 0 May 16 01:00:32.194: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 16 01:00:32.198: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 16 01:00:32.198: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 16 01:00:32.198: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 16 01:00:32.198: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 16 01:00:32.198: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 16 01:00:32.198: INFO: Container kindnet-cni ready: true, restart count 0 May 16 01:00:32.198: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 16 01:00:32.198: INFO: Container kube-proxy ready: true, restart count 0 May 16 01:00:32.198: INFO: netserver-1 from pod-network-test-8658 started at 2020-05-16 01:00:05 +0000 UTC (1 container statuses recorded) May 16 01:00:32.198: INFO: Container webserver ready: true, restart count 0 May 16 01:00:32.198: INFO: test-container-pod from pod-network-test-8658 started at 2020-05-16 01:00:23 +0000 UTC (1 container statuses recorded) May 16 01:00:32.198: INFO: Container webserver ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
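The steps that follow create three pods that all ask for hostPort 54321 but differ in hostIP or protocol. The scheduler treats the (hostIP, hostPort, protocol) triple as the conflict key, so all three can land on the same node; only a collision on all three fields is rejected. A minimal sketch of one such pod spec, with illustrative names and image (the suite generates its own):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod1                  # pod2/pod3 differ only in hostIP / protocol
  spec:
    containers:
    - name: server
      image: busybox            # illustrative; any image that can hold the port
      command: ["/bin/sh", "-c", "sleep 3600"]
      ports:
      - containerPort: 8080
        hostPort: 54321
        hostIP: 127.0.0.1       # pod2 uses 127.0.0.2
        protocol: TCP           # pod3 uses UDP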
STEP: verifying the node has the label kubernetes.io/e2e-4685d817-dbd2-4b21-af2f-5158e59bb84a 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-4685d817-dbd2-4b21-af2f-5158e59bb84a off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-4685d817-dbd2-4b21-af2f-5158e59bb84a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:00:50.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4037" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:18.419 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":235,"skipped":3845,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:00:50.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-edfb46ee-1c11-4d40-826c-27ba9637456b STEP: Creating secret with name s-test-opt-upd-9e2cab04-933a-4f6d-88bf-53c30d82a854 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-edfb46ee-1c11-4d40-826c-27ba9637456b STEP: Updating secret s-test-opt-upd-9e2cab04-933a-4f6d-88bf-53c30d82a854 STEP: Creating secret with name s-test-opt-create-6cc35a82-6f99-420f-ac01-25cc2e8cbc34 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:00:58.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8055" for this suite. 
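The optional-secret behaviour exercised above comes down to marking the volume source optional: the pod starts even if the referenced secret does not exist yet, and later creations, updates, and deletions are reflected in the mounted files by the kubelet's periodic sync. A minimal sketch, assuming illustrative names:

  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-optional-demo
  spec:
    containers:
    - name: main
      image: busybox
      command: ["/bin/sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: creds
        mountPath: /etc/creds
    volumes:
    - name: creds
      secret:
        secretName: s-test-opt   # may be created or deleted after the pod starts
        optional: true           # without this, a missing secret blocks pod startup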
• [SLOW TEST:8.381 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":236,"skipped":3875,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:00:58.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:01:03.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8079" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":237,"skipped":3891,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:01:03.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:01:03.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7354" for this suite. 
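The read-only busybox check above hinges on securityContext.readOnlyRootFilesystem; a rough equivalent of the pod it creates (a sketch, not the suite's exact spec):

  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-readonly-true
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: busybox
      command: ["/bin/sh", "-c", "touch /should-fail"]   # the write is expected to fail
      securityContext:
        readOnlyRootFilesystem: true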
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":238,"skipped":3918,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:01:03.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 16 01:01:03.865: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 16 01:01:05.875: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187663, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187663, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187663, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187663, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 01:01:07.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187663, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187663, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187663, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725187663, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 
16 01:01:10.960: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 01:01:11.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:01:12.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1395" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.140 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":239,"skipped":3937,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:01:12.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 16 01:01:12.545: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82178a58-b6c8-49ba-b884-f28b3a5d4f8a" in namespace "projected-766" to be "Succeeded or Failed" May 16 01:01:12.566: INFO: Pod "downwardapi-volume-82178a58-b6c8-49ba-b884-f28b3a5d4f8a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.383418ms May 16 01:01:14.569: INFO: Pod "downwardapi-volume-82178a58-b6c8-49ba-b884-f28b3a5d4f8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024158155s May 16 01:01:16.574: INFO: Pod "downwardapi-volume-82178a58-b6c8-49ba-b884-f28b3a5d4f8a": Phase="Running", Reason="", readiness=true. Elapsed: 4.028937244s May 16 01:01:18.598: INFO: Pod "downwardapi-volume-82178a58-b6c8-49ba-b884-f28b3a5d4f8a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.052473469s STEP: Saw pod success May 16 01:01:18.598: INFO: Pod "downwardapi-volume-82178a58-b6c8-49ba-b884-f28b3a5d4f8a" satisfied condition "Succeeded or Failed" May 16 01:01:18.601: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-82178a58-b6c8-49ba-b884-f28b3a5d4f8a container client-container: STEP: delete the pod May 16 01:01:18.646: INFO: Waiting for pod downwardapi-volume-82178a58-b6c8-49ba-b884-f28b3a5d4f8a to disappear May 16 01:01:18.658: INFO: Pod downwardapi-volume-82178a58-b6c8-49ba-b884-f28b3a5d4f8a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:01:18.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-766" for this suite. • [SLOW TEST:6.258 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":240,"skipped":3947,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:01:18.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3875 STEP: creating a selector STEP: Creating the service pods in kubernetes May 16 01:01:18.732: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 16 01:01:18.794: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 16 01:01:20.880: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 16 01:01:22.799: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:01:24.798: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:01:26.831: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:01:28.802: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:01:30.798: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:01:32.799: INFO: The status of Pod netserver-0 is Running (Ready = true) May 16 01:01:32.805: INFO: The status of Pod netserver-1 is Running (Ready = false) May 16 01:01:34.810: INFO: The status of Pod netserver-1 is Running (Ready = false) May 16 01:01:36.808: INFO: The status of Pod netserver-1 is Running (Ready = false) May 16 01:01:38.809: INFO: The status of Pod netserver-1 is Running (Ready = 
true) STEP: Creating test pods May 16 01:01:42.986: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.12:8080/dial?request=hostname&protocol=http&host=10.244.1.207&port=8080&tries=1'] Namespace:pod-network-test-3875 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 01:01:42.986: INFO: >>> kubeConfig: /root/.kube/config I0516 01:01:43.022655 7 log.go:172] (0xc00242d6b0) (0xc001870140) Create stream I0516 01:01:43.022693 7 log.go:172] (0xc00242d6b0) (0xc001870140) Stream added, broadcasting: 1 I0516 01:01:43.024675 7 log.go:172] (0xc00242d6b0) Reply frame received for 1 I0516 01:01:43.024716 7 log.go:172] (0xc00242d6b0) (0xc001c79900) Create stream I0516 01:01:43.024730 7 log.go:172] (0xc00242d6b0) (0xc001c79900) Stream added, broadcasting: 3 I0516 01:01:43.025966 7 log.go:172] (0xc00242d6b0) Reply frame received for 3 I0516 01:01:43.026006 7 log.go:172] (0xc00242d6b0) (0xc0018701e0) Create stream I0516 01:01:43.026020 7 log.go:172] (0xc00242d6b0) (0xc0018701e0) Stream added, broadcasting: 5 I0516 01:01:43.026999 7 log.go:172] (0xc00242d6b0) Reply frame received for 5 I0516 01:01:43.124387 7 log.go:172] (0xc00242d6b0) Data frame received for 3 I0516 01:01:43.124450 7 log.go:172] (0xc001c79900) (3) Data frame handling I0516 01:01:43.124478 7 log.go:172] (0xc001c79900) (3) Data frame sent I0516 01:01:43.124985 7 log.go:172] (0xc00242d6b0) Data frame received for 5 I0516 01:01:43.125021 7 log.go:172] (0xc0018701e0) (5) Data frame handling I0516 01:01:43.125046 7 log.go:172] (0xc00242d6b0) Data frame received for 3 I0516 01:01:43.125058 7 log.go:172] (0xc001c79900) (3) Data frame handling I0516 01:01:43.127255 7 log.go:172] (0xc00242d6b0) Data frame received for 1 I0516 01:01:43.127337 7 log.go:172] (0xc001870140) (1) Data frame handling I0516 01:01:43.127366 7 log.go:172] (0xc001870140) (1) Data frame sent I0516 01:01:43.127389 7 log.go:172] (0xc00242d6b0) (0xc001870140) Stream removed, broadcasting: 1 I0516 01:01:43.127422 7 log.go:172] (0xc00242d6b0) Go away received I0516 01:01:43.127499 7 log.go:172] (0xc00242d6b0) (0xc001870140) Stream removed, broadcasting: 1 I0516 01:01:43.127525 7 log.go:172] (0xc00242d6b0) (0xc001c79900) Stream removed, broadcasting: 3 I0516 01:01:43.127533 7 log.go:172] (0xc00242d6b0) (0xc0018701e0) Stream removed, broadcasting: 5 May 16 01:01:43.127: INFO: Waiting for responses: map[] May 16 01:01:43.144: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.12:8080/dial?request=hostname&protocol=http&host=10.244.2.11&port=8080&tries=1'] Namespace:pod-network-test-3875 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 01:01:43.144: INFO: >>> kubeConfig: /root/.kube/config I0516 01:01:43.194963 7 log.go:172] (0xc002eea2c0) (0xc001c02960) Create stream I0516 01:01:43.194990 7 log.go:172] (0xc002eea2c0) (0xc001c02960) Stream added, broadcasting: 1 I0516 01:01:43.196513 7 log.go:172] (0xc002eea2c0) Reply frame received for 1 I0516 01:01:43.196569 7 log.go:172] (0xc002eea2c0) (0xc001870320) Create stream I0516 01:01:43.196587 7 log.go:172] (0xc002eea2c0) (0xc001870320) Stream added, broadcasting: 3 I0516 01:01:43.197543 7 log.go:172] (0xc002eea2c0) Reply frame received for 3 I0516 01:01:43.197580 7 log.go:172] (0xc002eea2c0) (0xc00151fae0) Create stream I0516 01:01:43.197592 7 log.go:172] (0xc002eea2c0) (0xc00151fae0) Stream added, broadcasting: 5 I0516 01:01:43.198326 7 
log.go:172] (0xc002eea2c0) Reply frame received for 5 I0516 01:01:43.268102 7 log.go:172] (0xc002eea2c0) Data frame received for 3 I0516 01:01:43.268135 7 log.go:172] (0xc001870320) (3) Data frame handling I0516 01:01:43.268161 7 log.go:172] (0xc001870320) (3) Data frame sent I0516 01:01:43.268636 7 log.go:172] (0xc002eea2c0) Data frame received for 5 I0516 01:01:43.268659 7 log.go:172] (0xc00151fae0) (5) Data frame handling I0516 01:01:43.268680 7 log.go:172] (0xc002eea2c0) Data frame received for 3 I0516 01:01:43.268697 7 log.go:172] (0xc001870320) (3) Data frame handling I0516 01:01:43.270238 7 log.go:172] (0xc002eea2c0) Data frame received for 1 I0516 01:01:43.270256 7 log.go:172] (0xc001c02960) (1) Data frame handling I0516 01:01:43.270283 7 log.go:172] (0xc001c02960) (1) Data frame sent I0516 01:01:43.270303 7 log.go:172] (0xc002eea2c0) (0xc001c02960) Stream removed, broadcasting: 1 I0516 01:01:43.270320 7 log.go:172] (0xc002eea2c0) Go away received I0516 01:01:43.270479 7 log.go:172] (0xc002eea2c0) (0xc001c02960) Stream removed, broadcasting: 1 I0516 01:01:43.270505 7 log.go:172] (0xc002eea2c0) (0xc001870320) Stream removed, broadcasting: 3 I0516 01:01:43.270514 7 log.go:172] (0xc002eea2c0) (0xc00151fae0) Stream removed, broadcasting: 5 May 16 01:01:43.270: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:01:43.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3875" for this suite. • [SLOW TEST:24.612 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":241,"skipped":3966,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:01:43.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-97437efc-94f4-4727-a8f4-f685142f8545 STEP: Creating a pod to test consume configMaps May 16 01:01:43.416: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2ba38f19-d925-4112-8c72-778ba5c06cb5" in namespace "projected-4767" to be "Succeeded or Failed" May 16 01:01:43.478: INFO: Pod "pod-projected-configmaps-2ba38f19-d925-4112-8c72-778ba5c06cb5": 
Phase="Pending", Reason="", readiness=false. Elapsed: 62.051207ms May 16 01:01:45.556: INFO: Pod "pod-projected-configmaps-2ba38f19-d925-4112-8c72-778ba5c06cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139806786s May 16 01:01:47.560: INFO: Pod "pod-projected-configmaps-2ba38f19-d925-4112-8c72-778ba5c06cb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.144443188s STEP: Saw pod success May 16 01:01:47.560: INFO: Pod "pod-projected-configmaps-2ba38f19-d925-4112-8c72-778ba5c06cb5" satisfied condition "Succeeded or Failed" May 16 01:01:47.563: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-2ba38f19-d925-4112-8c72-778ba5c06cb5 container projected-configmap-volume-test: STEP: delete the pod May 16 01:01:47.744: INFO: Waiting for pod pod-projected-configmaps-2ba38f19-d925-4112-8c72-778ba5c06cb5 to disappear May 16 01:01:47.815: INFO: Pod pod-projected-configmaps-2ba38f19-d925-4112-8c72-778ba5c06cb5 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:01:47.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4767" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":242,"skipped":3995,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:01:47.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-27ce48f3-24ee-4ba3-b619-f6c83b1e7271 STEP: Creating a pod to test consume configMaps May 16 01:01:48.042: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b14a35fd-8ac2-4209-acfc-3b3a649e987c" in namespace "projected-6856" to be "Succeeded or Failed" May 16 01:01:48.072: INFO: Pod "pod-projected-configmaps-b14a35fd-8ac2-4209-acfc-3b3a649e987c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.803498ms May 16 01:01:50.221: INFO: Pod "pod-projected-configmaps-b14a35fd-8ac2-4209-acfc-3b3a649e987c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179168218s May 16 01:01:52.225: INFO: Pod "pod-projected-configmaps-b14a35fd-8ac2-4209-acfc-3b3a649e987c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183425946s May 16 01:01:54.230: INFO: Pod "pod-projected-configmaps-b14a35fd-8ac2-4209-acfc-3b3a649e987c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.187995894s STEP: Saw pod success May 16 01:01:54.230: INFO: Pod "pod-projected-configmaps-b14a35fd-8ac2-4209-acfc-3b3a649e987c" satisfied condition "Succeeded or Failed" May 16 01:01:54.232: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-b14a35fd-8ac2-4209-acfc-3b3a649e987c container projected-configmap-volume-test: STEP: delete the pod May 16 01:01:54.391: INFO: Waiting for pod pod-projected-configmaps-b14a35fd-8ac2-4209-acfc-3b3a649e987c to disappear May 16 01:01:54.432: INFO: Pod pod-projected-configmaps-b14a35fd-8ac2-4209-acfc-3b3a649e987c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:01:54.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6856" for this suite. • [SLOW TEST:6.552 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":243,"skipped":4043,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:01:54.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:01:58.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9193" for this suite. 
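The wrapper-volume test above amounts to mounting a secret volume and a configMap volume in the same pod and checking that both materialize without clashing. A minimal sketch with illustrative names:

  apiVersion: v1
  kind: Pod
  metadata:
    name: no-conflict-demo
  spec:
    containers:
    - name: main
      image: busybox
      command: ["/bin/sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: secret-vol
        mountPath: /etc/secret
      - name: config-vol
        mountPath: /etc/config
    volumes:
    - name: secret-vol
      secret:
        secretName: demo-secret   # illustrative
    - name: config-vol
      configMap:
        name: demo-config         # illustrative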
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":244,"skipped":4053,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:01:58.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 16 01:01:59.273: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4303 /api/v1/namespaces/watch-4303/configmaps/e2e-watch-test-label-changed ba2f174c-a17c-4c41-a771-8f24a35df813 5025517 0 2020-05-16 01:01:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-16 01:01:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 16 01:01:59.273: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4303 /api/v1/namespaces/watch-4303/configmaps/e2e-watch-test-label-changed ba2f174c-a17c-4c41-a771-8f24a35df813 5025518 0 2020-05-16 01:01:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-16 01:01:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 16 01:01:59.274: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4303 /api/v1/namespaces/watch-4303/configmaps/e2e-watch-test-label-changed ba2f174c-a17c-4c41-a771-8f24a35df813 5025519 0 2020-05-16 01:01:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-16 01:01:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 16 01:02:09.339: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4303 /api/v1/namespaces/watch-4303/configmaps/e2e-watch-test-label-changed ba2f174c-a17c-4c41-a771-8f24a35df813 5025579 0 2020-05-16 01:01:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[{e2e.test Update v1 2020-05-16 01:02:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 16 01:02:09.339: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4303 /api/v1/namespaces/watch-4303/configmaps/e2e-watch-test-label-changed ba2f174c-a17c-4c41-a771-8f24a35df813 5025580 0 2020-05-16 01:01:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-16 01:02:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 16 01:02:09.339: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4303 /api/v1/namespaces/watch-4303/configmaps/e2e-watch-test-label-changed ba2f174c-a17c-4c41-a771-8f24a35df813 5025581 0 2020-05-16 01:01:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-16 01:02:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:02:09.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4303" for this suite. • [SLOW TEST:10.457 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":245,"skipped":4054,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:02:09.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 16 01:02:09.411: INFO: Waiting up to 5m0s for pod "pod-710b7272-de3c-41c7-bd21-742285e5f608" in namespace "emptydir-1723" to be "Succeeded or Failed" May 16 01:02:09.425: INFO: Pod "pod-710b7272-de3c-41c7-bd21-742285e5f608": Phase="Pending", Reason="", readiness=false. Elapsed: 14.302778ms May 16 01:02:11.761: INFO: Pod "pod-710b7272-de3c-41c7-bd21-742285e5f608": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.350549205s May 16 01:02:13.772: INFO: Pod "pod-710b7272-de3c-41c7-bd21-742285e5f608": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.360866082s STEP: Saw pod success May 16 01:02:13.772: INFO: Pod "pod-710b7272-de3c-41c7-bd21-742285e5f608" satisfied condition "Succeeded or Failed" May 16 01:02:13.775: INFO: Trying to get logs from node latest-worker2 pod pod-710b7272-de3c-41c7-bd21-742285e5f608 container test-container: STEP: delete the pod May 16 01:02:13.857: INFO: Waiting for pod pod-710b7272-de3c-41c7-bd21-742285e5f608 to disappear May 16 01:02:13.897: INFO: Pod pod-710b7272-de3c-41c7-bd21-742285e5f608 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:02:13.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1723" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":246,"skipped":4075,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:02:13.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 16 01:02:13.995: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1152 /api/v1/namespaces/watch-1152/configmaps/e2e-watch-test-configmap-a b034583b-93e8-4a51-8dbf-d120f1978bc8 5025620 0 2020-05-16 01:02:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-16 01:02:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 16 01:02:13.996: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1152 /api/v1/namespaces/watch-1152/configmaps/e2e-watch-test-configmap-a b034583b-93e8-4a51-8dbf-d120f1978bc8 5025620 0 2020-05-16 01:02:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-16 01:02:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 16 01:02:24.005: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1152 /api/v1/namespaces/watch-1152/configmaps/e2e-watch-test-configmap-a b034583b-93e8-4a51-8dbf-d120f1978bc8 5025668 0 2020-05-16 01:02:13 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-16 01:02:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 16 01:02:24.006: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1152 /api/v1/namespaces/watch-1152/configmaps/e2e-watch-test-configmap-a b034583b-93e8-4a51-8dbf-d120f1978bc8 5025668 0 2020-05-16 01:02:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-16 01:02:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 16 01:02:34.015: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1152 /api/v1/namespaces/watch-1152/configmaps/e2e-watch-test-configmap-a b034583b-93e8-4a51-8dbf-d120f1978bc8 5025708 0 2020-05-16 01:02:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-16 01:02:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 16 01:02:34.015: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1152 /api/v1/namespaces/watch-1152/configmaps/e2e-watch-test-configmap-a b034583b-93e8-4a51-8dbf-d120f1978bc8 5025708 0 2020-05-16 01:02:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-16 01:02:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 16 01:02:44.023: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1152 /api/v1/namespaces/watch-1152/configmaps/e2e-watch-test-configmap-a b034583b-93e8-4a51-8dbf-d120f1978bc8 5025747 0 2020-05-16 01:02:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-16 01:02:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 16 01:02:44.024: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1152 /api/v1/namespaces/watch-1152/configmaps/e2e-watch-test-configmap-a b034583b-93e8-4a51-8dbf-d120f1978bc8 5025747 0 2020-05-16 01:02:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-16 01:02:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 16 01:02:54.066: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1152 /api/v1/namespaces/watch-1152/configmaps/e2e-watch-test-configmap-b c8039797-56b2-4649-8b2b-d04721284e60 5025779 0 
2020-05-16 01:02:54 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-16 01:02:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 16 01:02:54.067: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1152 /api/v1/namespaces/watch-1152/configmaps/e2e-watch-test-configmap-b c8039797-56b2-4649-8b2b-d04721284e60 5025779 0 2020-05-16 01:02:54 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-16 01:02:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 16 01:03:04.197: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1152 /api/v1/namespaces/watch-1152/configmaps/e2e-watch-test-configmap-b c8039797-56b2-4649-8b2b-d04721284e60 5025818 0 2020-05-16 01:02:54 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-16 01:02:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 16 01:03:04.197: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1152 /api/v1/namespaces/watch-1152/configmaps/e2e-watch-test-configmap-b c8039797-56b2-4649-8b2b-d04721284e60 5025818 0 2020-05-16 01:02:54 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-16 01:02:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:03:14.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1152" for this suite. 
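The watch behaviour above can be reproduced by hand: create a ConfigMap carrying the label the watch selects on, and run kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch in another terminal. A matching object, sketched with an illustrative name:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: watch-demo
    labels:
      watch-this-configmap: multiple-watchers-A
  data:
    mutation: "1"

Every create, update, and delete then surfaces as an ADDED/MODIFIED/DELETED event on each watcher whose selector matches, which is why the matching events appear twice in the log above (once for the label-A watcher, once for the A-or-B watcher).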
• [SLOW TEST:60.302 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":247,"skipped":4108,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:03:14.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 16 01:03:18.340: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-5449 PodName:var-expansion-c11b3274-e818-40e0-86a8-016163bb9850 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 01:03:18.340: INFO: >>> kubeConfig: /root/.kube/config I0516 01:03:18.373008 7 log.go:172] (0xc00256e2c0) (0xc001d7e820) Create stream I0516 01:03:18.373077 7 log.go:172] (0xc00256e2c0) (0xc001d7e820) Stream added, broadcasting: 1 I0516 01:03:18.374792 7 log.go:172] (0xc00256e2c0) Reply frame received for 1 I0516 01:03:18.374820 7 log.go:172] (0xc00256e2c0) (0xc001d7e960) Create stream I0516 01:03:18.374827 7 log.go:172] (0xc00256e2c0) (0xc001d7e960) Stream added, broadcasting: 3 I0516 01:03:18.375856 7 log.go:172] (0xc00256e2c0) Reply frame received for 3 I0516 01:03:18.375892 7 log.go:172] (0xc00256e2c0) (0xc002585180) Create stream I0516 01:03:18.375908 7 log.go:172] (0xc00256e2c0) (0xc002585180) Stream added, broadcasting: 5 I0516 01:03:18.376778 7 log.go:172] (0xc00256e2c0) Reply frame received for 5 I0516 01:03:18.431896 7 log.go:172] (0xc00256e2c0) Data frame received for 3 I0516 01:03:18.431917 7 log.go:172] (0xc001d7e960) (3) Data frame handling I0516 01:03:18.431930 7 log.go:172] (0xc00256e2c0) Data frame received for 5 I0516 01:03:18.431937 7 log.go:172] (0xc002585180) (5) Data frame handling I0516 01:03:18.433288 7 log.go:172] (0xc00256e2c0) Data frame received for 1 I0516 01:03:18.433308 7 log.go:172] (0xc001d7e820) (1) Data frame handling I0516 01:03:18.433319 7 log.go:172] (0xc001d7e820) (1) Data frame sent I0516 01:03:18.433336 7 log.go:172] (0xc00256e2c0) (0xc001d7e820) Stream removed, broadcasting: 1 I0516 01:03:18.433398 7 log.go:172] (0xc00256e2c0) Go away received I0516 01:03:18.433432 7 log.go:172] (0xc00256e2c0) (0xc001d7e820) Stream removed, broadcasting: 1 I0516 01:03:18.433452 7 log.go:172] (0xc00256e2c0) (0xc001d7e960) Stream removed, broadcasting: 3 I0516 01:03:18.433463 7 log.go:172] (0xc00256e2c0) (0xc002585180) Stream removed, 
broadcasting: 5 STEP: test for file in mounted path May 16 01:03:18.436: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-5449 PodName:var-expansion-c11b3274-e818-40e0-86a8-016163bb9850 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 01:03:18.436: INFO: >>> kubeConfig: /root/.kube/config I0516 01:03:18.468582 7 log.go:172] (0xc00242cf20) (0xc002585c20) Create stream I0516 01:03:18.468608 7 log.go:172] (0xc00242cf20) (0xc002585c20) Stream added, broadcasting: 1 I0516 01:03:18.469902 7 log.go:172] (0xc00242cf20) Reply frame received for 1 I0516 01:03:18.469925 7 log.go:172] (0xc00242cf20) (0xc002585e00) Create stream I0516 01:03:18.469934 7 log.go:172] (0xc00242cf20) (0xc002585e00) Stream added, broadcasting: 3 I0516 01:03:18.470510 7 log.go:172] (0xc00242cf20) Reply frame received for 3 I0516 01:03:18.470547 7 log.go:172] (0xc00242cf20) (0xc00179a0a0) Create stream I0516 01:03:18.470560 7 log.go:172] (0xc00242cf20) (0xc00179a0a0) Stream added, broadcasting: 5 I0516 01:03:18.471197 7 log.go:172] (0xc00242cf20) Reply frame received for 5 I0516 01:03:18.519776 7 log.go:172] (0xc00242cf20) Data frame received for 3 I0516 01:03:18.519817 7 log.go:172] (0xc002585e00) (3) Data frame handling I0516 01:03:18.519910 7 log.go:172] (0xc00242cf20) Data frame received for 5 I0516 01:03:18.519930 7 log.go:172] (0xc00179a0a0) (5) Data frame handling I0516 01:03:18.521570 7 log.go:172] (0xc00242cf20) Data frame received for 1 I0516 01:03:18.521609 7 log.go:172] (0xc002585c20) (1) Data frame handling I0516 01:03:18.521659 7 log.go:172] (0xc002585c20) (1) Data frame sent I0516 01:03:18.521693 7 log.go:172] (0xc00242cf20) (0xc002585c20) Stream removed, broadcasting: 1 I0516 01:03:18.521724 7 log.go:172] (0xc00242cf20) Go away received I0516 01:03:18.521774 7 log.go:172] (0xc00242cf20) (0xc002585c20) Stream removed, broadcasting: 1 I0516 01:03:18.521798 7 log.go:172] (0xc00242cf20) (0xc002585e00) Stream removed, broadcasting: 3 I0516 01:03:18.521808 7 log.go:172] (0xc00242cf20) (0xc00179a0a0) Stream removed, broadcasting: 5 STEP: updating the annotation value May 16 01:03:19.030: INFO: Successfully updated pod "var-expansion-c11b3274-e818-40e0-86a8-016163bb9850" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 16 01:03:19.049: INFO: Deleting pod "var-expansion-c11b3274-e818-40e0-86a8-016163bb9850" in namespace "var-expansion-5449" May 16 01:03:19.052: INFO: Wait up to 5m0s for pod "var-expansion-c11b3274-e818-40e0-86a8-016163bb9850" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:03:55.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5449" for this suite. 
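The subpath write above can be approximated with subPathExpr, which expands an environment variable into the subdirectory a volume is mounted at (a simplified sketch; the suite's pod also mounts the parent volume separately to verify the file):

  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["/bin/sh", "-c", "touch /volume_mount/test.log"]
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      volumeMounts:
      - name: workdir
        mountPath: /volume_mount
        subPathExpr: $(POD_NAME)   # expands to the pod's own name
    volumes:
    - name: workdir
      emptyDir: {}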
• [SLOW TEST:40.877 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":248,"skipped":4126,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:03:55.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 16 01:04:03.275: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 16 01:04:03.297: INFO: Pod pod-with-poststart-http-hook still exists May 16 01:04:05.297: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 16 01:04:05.302: INFO: Pod pod-with-poststart-http-hook still exists May 16 01:04:07.297: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 16 01:04:07.302: INFO: Pod pod-with-poststart-http-hook still exists May 16 01:04:09.297: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 16 01:04:09.302: INFO: Pod pod-with-poststart-http-hook still exists May 16 01:04:11.297: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 16 01:04:11.302: INFO: Pod pod-with-poststart-http-hook still exists May 16 01:04:13.297: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 16 01:04:13.302: INFO: Pod pod-with-poststart-http-hook still exists May 16 01:04:15.297: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 16 01:04:15.302: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:04:15.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5064" for this suite. 
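The postStart hook in the lifecycle test above is an httpGet handler: the kubelet fires the request right after the container starts and kills the container if the handler fails, which is why the suite first brings up a separate pod to answer the request. In spec form it looks roughly like this (host, path, and port are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-http-hook
  spec:
    containers:
    - name: main
      image: busybox
      command: ["/bin/sh", "-c", "sleep 3600"]
      lifecycle:
        postStart:
          httpGet:
            host: 10.244.1.10        # illustrative; the suite targets its handler pod's IP
            path: /echo?msg=poststart
            port: 8080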
• [SLOW TEST:20.225 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":249,"skipped":4137,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:04:15.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 01:04:15.401: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-b3741745-db18-4651-a1d5-f2481b875d8a" in namespace "security-context-test-2871" to be "Succeeded or Failed" May 16 01:04:15.405: INFO: Pod "busybox-readonly-false-b3741745-db18-4651-a1d5-f2481b875d8a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.233893ms May 16 01:04:17.408: INFO: Pod "busybox-readonly-false-b3741745-db18-4651-a1d5-f2481b875d8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006564533s May 16 01:04:19.412: INFO: Pod "busybox-readonly-false-b3741745-db18-4651-a1d5-f2481b875d8a": Phase="Running", Reason="", readiness=true. Elapsed: 4.010391507s May 16 01:04:21.416: INFO: Pod "busybox-readonly-false-b3741745-db18-4651-a1d5-f2481b875d8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014178908s May 16 01:04:21.416: INFO: Pod "busybox-readonly-false-b3741745-db18-4651-a1d5-f2481b875d8a" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:04:21.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2871" for this suite. 
• [SLOW TEST:6.112 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":250,"skipped":4139,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:04:21.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-9a7a60c2-89f7-433d-8e8e-b31da5820071 STEP: Creating a pod to test consume secrets May 16 01:04:21.520: INFO: Waiting up to 5m0s for pod "pod-secrets-39106546-b33a-4dd1-8be6-9dd1b62903f9" in namespace "secrets-3270" to be "Succeeded or Failed" May 16 01:04:21.539: INFO: Pod "pod-secrets-39106546-b33a-4dd1-8be6-9dd1b62903f9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.850471ms May 16 01:04:23.594: INFO: Pod "pod-secrets-39106546-b33a-4dd1-8be6-9dd1b62903f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074203548s May 16 01:04:25.617: INFO: Pod "pod-secrets-39106546-b33a-4dd1-8be6-9dd1b62903f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096928733s STEP: Saw pod success May 16 01:04:25.617: INFO: Pod "pod-secrets-39106546-b33a-4dd1-8be6-9dd1b62903f9" satisfied condition "Succeeded or Failed" May 16 01:04:25.620: INFO: Trying to get logs from node latest-worker pod pod-secrets-39106546-b33a-4dd1-8be6-9dd1b62903f9 container secret-env-test: STEP: delete the pod May 16 01:04:25.653: INFO: Waiting for pod pod-secrets-39106546-b33a-4dd1-8be6-9dd1b62903f9 to disappear May 16 01:04:25.668: INFO: Pod pod-secrets-39106546-b33a-4dd1-8be6-9dd1b62903f9 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:04:25.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3270" for this suite. 
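(Aside, an illustrative sketch of the secret-to-env-var plumbing the test above consumes; the secret name, key, and value are made up.)

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["/bin/sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: data-1
EOF

The test then reads the container log, as above, and checks the env var carries the secret's value.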
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":251,"skipped":4144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:04:25.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 01:04:25.957: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 16 01:04:27.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8238 create -f -' May 16 01:04:31.201: INFO: stderr: "" May 16 01:04:31.202: INFO: stdout: "e2e-test-crd-publish-openapi-9397-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 16 01:04:31.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8238 delete e2e-test-crd-publish-openapi-9397-crds test-cr' May 16 01:04:31.326: INFO: stderr: "" May 16 01:04:31.326: INFO: stdout: "e2e-test-crd-publish-openapi-9397-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 16 01:04:31.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8238 apply -f -' May 16 01:04:31.585: INFO: stderr: "" May 16 01:04:31.585: INFO: stdout: "e2e-test-crd-publish-openapi-9397-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 16 01:04:31.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8238 delete e2e-test-crd-publish-openapi-9397-crds test-cr' May 16 01:04:31.710: INFO: stderr: "" May 16 01:04:31.711: INFO: stdout: "e2e-test-crd-publish-openapi-9397-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 16 01:04:31.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9397-crds' May 16 01:04:32.019: INFO: stderr: "" May 16 01:04:32.019: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9397-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:04:34.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8238" for this suite. 
• [SLOW TEST:9.259 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":252,"skipped":4167,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:04:34.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3514 STEP: creating service affinity-clusterip in namespace services-3514 STEP: creating replication controller affinity-clusterip in namespace services-3514 I0516 01:04:35.617854 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-3514, replica count: 3 I0516 01:04:38.668304 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 01:04:41.668558 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 16 01:04:41.676: INFO: Creating new exec pod May 16 01:04:46.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3514 execpod-affinity2hg4g -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' May 16 01:04:46.911: INFO: stderr: "I0516 01:04:46.819441 3342 log.go:172] (0xc00097cd10) (0xc000b58140) Create stream\nI0516 01:04:46.819476 3342 log.go:172] (0xc00097cd10) (0xc000b58140) Stream added, broadcasting: 1\nI0516 01:04:46.822074 3342 log.go:172] (0xc00097cd10) Reply frame received for 1\nI0516 01:04:46.822165 3342 log.go:172] (0xc00097cd10) (0xc00054b860) Create stream\nI0516 01:04:46.822197 3342 log.go:172] (0xc00097cd10) (0xc00054b860) Stream added, broadcasting: 3\nI0516 01:04:46.824107 3342 log.go:172] (0xc00097cd10) Reply frame received for 3\nI0516 01:04:46.824135 3342 log.go:172] (0xc00097cd10) (0xc0000f3900) Create stream\nI0516 01:04:46.824143 3342 log.go:172] (0xc00097cd10) (0xc0000f3900) Stream added, broadcasting: 5\nI0516 01:04:46.824994 3342 log.go:172] (0xc00097cd10) Reply frame received for 5\nI0516 01:04:46.903808 3342 log.go:172] (0xc00097cd10) Data frame received for 5\nI0516 01:04:46.903833 3342 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0516 01:04:46.903846 3342 log.go:172] 
(0xc0000f3900) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0516 01:04:46.904637 3342 log.go:172] (0xc00097cd10) Data frame received for 5\nI0516 01:04:46.904666 3342 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0516 01:04:46.904686 3342 log.go:172] (0xc0000f3900) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0516 01:04:46.905068 3342 log.go:172] (0xc00097cd10) Data frame received for 5\nI0516 01:04:46.905100 3342 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0516 01:04:46.905245 3342 log.go:172] (0xc00097cd10) Data frame received for 3\nI0516 01:04:46.905263 3342 log.go:172] (0xc00054b860) (3) Data frame handling\nI0516 01:04:46.906674 3342 log.go:172] (0xc00097cd10) Data frame received for 1\nI0516 01:04:46.906714 3342 log.go:172] (0xc000b58140) (1) Data frame handling\nI0516 01:04:46.906732 3342 log.go:172] (0xc000b58140) (1) Data frame sent\nI0516 01:04:46.906773 3342 log.go:172] (0xc00097cd10) (0xc000b58140) Stream removed, broadcasting: 1\nI0516 01:04:46.906810 3342 log.go:172] (0xc00097cd10) Go away received\nI0516 01:04:46.907086 3342 log.go:172] (0xc00097cd10) (0xc000b58140) Stream removed, broadcasting: 1\nI0516 01:04:46.907114 3342 log.go:172] (0xc00097cd10) (0xc00054b860) Stream removed, broadcasting: 3\nI0516 01:04:46.907134 3342 log.go:172] (0xc00097cd10) (0xc0000f3900) Stream removed, broadcasting: 5\n" May 16 01:04:46.911: INFO: stdout: "" May 16 01:04:46.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3514 execpod-affinity2hg4g -- /bin/sh -x -c nc -zv -t -w 2 10.105.230.239 80' May 16 01:04:47.118: INFO: stderr: "I0516 01:04:47.048216 3364 log.go:172] (0xc00003b600) (0xc000ad4500) Create stream\nI0516 01:04:47.048288 3364 log.go:172] (0xc00003b600) (0xc000ad4500) Stream added, broadcasting: 1\nI0516 01:04:47.052719 3364 log.go:172] (0xc00003b600) Reply frame received for 1\nI0516 01:04:47.052762 3364 log.go:172] (0xc00003b600) (0xc000846640) Create stream\nI0516 01:04:47.052776 3364 log.go:172] (0xc00003b600) (0xc000846640) Stream added, broadcasting: 3\nI0516 01:04:47.053732 3364 log.go:172] (0xc00003b600) Reply frame received for 3\nI0516 01:04:47.053756 3364 log.go:172] (0xc00003b600) (0xc000846fa0) Create stream\nI0516 01:04:47.053763 3364 log.go:172] (0xc00003b600) (0xc000846fa0) Stream added, broadcasting: 5\nI0516 01:04:47.054746 3364 log.go:172] (0xc00003b600) Reply frame received for 5\nI0516 01:04:47.112397 3364 log.go:172] (0xc00003b600) Data frame received for 5\nI0516 01:04:47.112440 3364 log.go:172] (0xc000846fa0) (5) Data frame handling\nI0516 01:04:47.112456 3364 log.go:172] (0xc000846fa0) (5) Data frame sent\n+ nc -zv -t -w 2 10.105.230.239 80\nConnection to 10.105.230.239 80 port [tcp/http] succeeded!\nI0516 01:04:47.112483 3364 log.go:172] (0xc00003b600) Data frame received for 3\nI0516 01:04:47.112493 3364 log.go:172] (0xc000846640) (3) Data frame handling\nI0516 01:04:47.112563 3364 log.go:172] (0xc00003b600) Data frame received for 5\nI0516 01:04:47.112579 3364 log.go:172] (0xc000846fa0) (5) Data frame handling\nI0516 01:04:47.114064 3364 log.go:172] (0xc00003b600) Data frame received for 1\nI0516 01:04:47.114093 3364 log.go:172] (0xc000ad4500) (1) Data frame handling\nI0516 01:04:47.114132 3364 log.go:172] (0xc000ad4500) (1) Data frame sent\nI0516 01:04:47.114161 3364 log.go:172] (0xc00003b600) (0xc000ad4500) Stream removed, broadcasting: 1\nI0516 01:04:47.114316 3364 log.go:172] 
(0xc00003b600) Go away received\nI0516 01:04:47.114615 3364 log.go:172] (0xc00003b600) (0xc000ad4500) Stream removed, broadcasting: 1\nI0516 01:04:47.114636 3364 log.go:172] (0xc00003b600) (0xc000846640) Stream removed, broadcasting: 3\nI0516 01:04:47.114647 3364 log.go:172] (0xc00003b600) (0xc000846fa0) Stream removed, broadcasting: 5\n" May 16 01:04:47.118: INFO: stdout: "" May 16 01:04:47.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3514 execpod-affinity2hg4g -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.230.239:80/ ; done' May 16 01:04:47.403: INFO: stderr: "I0516 01:04:47.240560 3384 log.go:172] (0xc000a740b0) (0xc000516640) Create stream\nI0516 01:04:47.240637 3384 log.go:172] (0xc000a740b0) (0xc000516640) Stream added, broadcasting: 1\nI0516 01:04:47.244875 3384 log.go:172] (0xc000a740b0) Reply frame received for 1\nI0516 01:04:47.244937 3384 log.go:172] (0xc000a740b0) (0xc000306140) Create stream\nI0516 01:04:47.244967 3384 log.go:172] (0xc000a740b0) (0xc000306140) Stream added, broadcasting: 3\nI0516 01:04:47.248143 3384 log.go:172] (0xc000a740b0) Reply frame received for 3\nI0516 01:04:47.248159 3384 log.go:172] (0xc000a740b0) (0xc000306640) Create stream\nI0516 01:04:47.248165 3384 log.go:172] (0xc000a740b0) (0xc000306640) Stream added, broadcasting: 5\nI0516 01:04:47.248918 3384 log.go:172] (0xc000a740b0) Reply frame received for 5\nI0516 01:04:47.320958 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.320990 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.321009 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.321036 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.321049 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.321077 3384 log.go:172] (0xc000306640) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.230.239:80/\nI0516 01:04:47.326278 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.326302 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.326319 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.327862 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.327887 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.327899 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.327923 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.327933 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.327941 3384 log.go:172] (0xc000306640) (5) Data frame sent\nI0516 01:04:47.327948 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.327961 3384 log.go:172] (0xc000306640) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.230.239:80/\nI0516 01:04:47.328022 3384 log.go:172] (0xc000306640) (5) Data frame sent\nI0516 01:04:47.330648 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.330661 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.330667 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.331058 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.331106 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.331123 3384 log.go:172] (0xc000306640) (5) Data frame sent\n+ echo\n+ curl -q 
-sI0516 01:04:47.331142 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.331160 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.331172 3384 log.go:172] (0xc000306640) (5) Data frame sent\n --connect-timeout 2 http://10.105.230.239:80/\nI0516 01:04:47.331192 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.331202 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.331213 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.335523 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.335541 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.335552 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.335894 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.335949 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.335971 3384 log.go:172] (0xc000306640) (5) Data frame sent\n+ I0516 01:04:47.335988 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.336018 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.336035 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.336055 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.336070 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.336088 3384 log.go:172] (0xc000306640) (5) Data frame sent\nI0516 01:04:47.336096 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.336104 3384 log.go:172] (0xc000306640) (5) Data frame handling\necho\n+ curl -q -s --connect-timeout 2 http://10.105.230.239:80/\nI0516 01:04:47.336119 3384 log.go:172] (0xc000306640) (5) Data frame sent\nI0516 01:04:47.339875 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.339898 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.339915 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.340158 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.340170 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.340189 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.340199 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.340205 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.340213 3384 log.go:172] (0xc000306640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.230.239:80/\nI0516 01:04:47.344286 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.344313 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.344327 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.344710 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.344730 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.344748 3384 log.go:172] (0xc000306640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.230.239:80/\nI0516 01:04:47.344764 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.344775 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.344796 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.348464 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.348483 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.348498 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 
01:04:47.349757 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.349769 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.349776 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.349788 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.349805 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.349821 3384 log.go:172] (0xc000306640) (5) Data frame sent\nI0516 01:04:47.349845 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.349858 3384 log.go:172] (0xc000306640) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.230.239:80/\nI0516 01:04:47.349877 3384 log.go:172] (0xc000306640) (5) Data frame sent\nI0516 01:04:47.354155 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.354169 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.354183 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.354541 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.354553 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.354560 3384 log.go:172] (0xc000306640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.230.239:80/\nI0516 01:04:47.354570 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.354576 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.354583 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.358133 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.358158 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.358191 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.358645 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.358666 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.358675 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.358683 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.358693 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.358761 3384 log.go:172] (0xc000306640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.230.239:80/\nI0516 01:04:47.362400 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.362421 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.362453 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.362935 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.362945 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.362950 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.362976 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.363008 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.363040 3384 log.go:172] (0xc000306640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.230.239:80/\nI0516 01:04:47.366642 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.366666 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.366684 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.367400 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.367416 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.367437 3384 log.go:172] (0xc000306140) (3) Data frame 
sent\nI0516 01:04:47.367509 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.367530 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.367546 3384 log.go:172] (0xc000306640) (5) Data frame sent\nI0516 01:04:47.367561 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.367579 3384 log.go:172] (0xc000306640) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.230.239:80/\nI0516 01:04:47.367602 3384 log.go:172] (0xc000306640) (5) Data frame sent\nI0516 01:04:47.371807 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.371837 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.371855 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.372183 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.372211 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.372238 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.372252 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.372278 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.372303 3384 log.go:172] (0xc000306640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.230.239:80/\nI0516 01:04:47.375290 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.375306 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.375315 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.376076 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.376086 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.376097 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.376204 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.376215 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.376222 3384 log.go:172] (0xc000306640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.230.239:80/\nI0516 01:04:47.382431 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.382449 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.382469 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.382806 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.382819 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.382825 3384 log.go:172] (0xc000306640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.230.239:80/\nI0516 01:04:47.382834 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.382838 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.382843 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.387326 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.387354 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.387373 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.387807 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.387845 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.387858 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.387872 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.387889 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.387897 3384 log.go:172] (0xc000306640) (5) 
Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.230.239:80/\nI0516 01:04:47.392033 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.392051 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.392068 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.392410 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.392434 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.392445 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.392464 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.392473 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.392481 3384 log.go:172] (0xc000306640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.230.239:80/\nI0516 01:04:47.396331 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.396351 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.396368 3384 log.go:172] (0xc000306140) (3) Data frame sent\nI0516 01:04:47.396789 3384 log.go:172] (0xc000a740b0) Data frame received for 3\nI0516 01:04:47.396813 3384 log.go:172] (0xc000306140) (3) Data frame handling\nI0516 01:04:47.396904 3384 log.go:172] (0xc000a740b0) Data frame received for 5\nI0516 01:04:47.396946 3384 log.go:172] (0xc000306640) (5) Data frame handling\nI0516 01:04:47.398435 3384 log.go:172] (0xc000a740b0) Data frame received for 1\nI0516 01:04:47.398466 3384 log.go:172] (0xc000516640) (1) Data frame handling\nI0516 01:04:47.398484 3384 log.go:172] (0xc000516640) (1) Data frame sent\nI0516 01:04:47.398549 3384 log.go:172] (0xc000a740b0) (0xc000516640) Stream removed, broadcasting: 1\nI0516 01:04:47.398921 3384 log.go:172] (0xc000a740b0) (0xc000516640) Stream removed, broadcasting: 1\nI0516 01:04:47.398944 3384 log.go:172] (0xc000a740b0) (0xc000306140) Stream removed, broadcasting: 3\nI0516 01:04:47.399124 3384 log.go:172] (0xc000a740b0) (0xc000306640) Stream removed, broadcasting: 5\n" May 16 01:04:47.404: INFO: stdout: "\naffinity-clusterip-ltwdk\naffinity-clusterip-ltwdk\naffinity-clusterip-ltwdk\naffinity-clusterip-ltwdk\naffinity-clusterip-ltwdk\naffinity-clusterip-ltwdk\naffinity-clusterip-ltwdk\naffinity-clusterip-ltwdk\naffinity-clusterip-ltwdk\naffinity-clusterip-ltwdk\naffinity-clusterip-ltwdk\naffinity-clusterip-ltwdk\naffinity-clusterip-ltwdk\naffinity-clusterip-ltwdk\naffinity-clusterip-ltwdk\naffinity-clusterip-ltwdk" May 16 01:04:47.404: INFO: Received response from host: May 16 01:04:47.404: INFO: Received response from host: affinity-clusterip-ltwdk May 16 01:04:47.404: INFO: Received response from host: affinity-clusterip-ltwdk May 16 01:04:47.404: INFO: Received response from host: affinity-clusterip-ltwdk May 16 01:04:47.404: INFO: Received response from host: affinity-clusterip-ltwdk May 16 01:04:47.404: INFO: Received response from host: affinity-clusterip-ltwdk May 16 01:04:47.404: INFO: Received response from host: affinity-clusterip-ltwdk May 16 01:04:47.404: INFO: Received response from host: affinity-clusterip-ltwdk May 16 01:04:47.404: INFO: Received response from host: affinity-clusterip-ltwdk May 16 01:04:47.404: INFO: Received response from host: affinity-clusterip-ltwdk May 16 01:04:47.404: INFO: Received response from host: affinity-clusterip-ltwdk May 16 01:04:47.404: INFO: Received response from host: affinity-clusterip-ltwdk May 16 01:04:47.404: INFO: Received response from host: affinity-clusterip-ltwdk May 16 
01:04:47.404: INFO: Received response from host: affinity-clusterip-ltwdk May 16 01:04:47.404: INFO: Received response from host: affinity-clusterip-ltwdk May 16 01:04:47.404: INFO: Received response from host: affinity-clusterip-ltwdk May 16 01:04:47.404: INFO: Received response from host: affinity-clusterip-ltwdk May 16 01:04:47.404: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-3514, will wait for the garbage collector to delete the pods May 16 01:04:47.919: INFO: Deleting ReplicationController affinity-clusterip took: 5.760693ms May 16 01:04:48.419: INFO: Terminating ReplicationController affinity-clusterip pods took: 500.308955ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:04:55.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3514" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.423 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":253,"skipped":4174,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:04:55.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-583afcf7-76ce-461b-b5e3-d3e90bf9e55b STEP: Creating a pod to test consume configMaps May 16 01:04:55.462: INFO: Waiting up to 5m0s for pod "pod-configmaps-1753c844-d330-4e1b-b64c-742bba9766e6" in namespace "configmap-6430" to be "Succeeded or Failed" May 16 01:04:55.466: INFO: Pod "pod-configmaps-1753c844-d330-4e1b-b64c-742bba9766e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142824ms May 16 01:04:57.521: INFO: Pod "pod-configmaps-1753c844-d330-4e1b-b64c-742bba9766e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059593187s May 16 01:04:59.525: INFO: Pod "pod-configmaps-1753c844-d330-4e1b-b64c-742bba9766e6": Phase="Running", Reason="", readiness=true. Elapsed: 4.063149315s May 16 01:05:01.529: INFO: Pod "pod-configmaps-1753c844-d330-4e1b-b64c-742bba9766e6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.066716848s STEP: Saw pod success May 16 01:05:01.529: INFO: Pod "pod-configmaps-1753c844-d330-4e1b-b64c-742bba9766e6" satisfied condition "Succeeded or Failed" May 16 01:05:01.531: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-1753c844-d330-4e1b-b64c-742bba9766e6 container configmap-volume-test: STEP: delete the pod May 16 01:05:01.564: INFO: Waiting for pod pod-configmaps-1753c844-d330-4e1b-b64c-742bba9766e6 to disappear May 16 01:05:01.683: INFO: Pod pod-configmaps-1753c844-d330-4e1b-b64c-742bba9766e6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:05:01.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6430" for this suite. • [SLOW TEST:6.346 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":254,"skipped":4198,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:05:01.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3368 STEP: creating a selector STEP: Creating the service pods in kubernetes May 16 01:05:01.765: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 16 01:05:01.851: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 16 01:05:04.072: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 16 01:05:05.874: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:05:07.855: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:05:09.854: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:05:11.854: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:05:13.904: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:05:15.856: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:05:17.856: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:05:19.856: INFO: The status of Pod netserver-0 is Running (Ready = false) May 16 01:05:21.863: INFO: The status of Pod netserver-0 is Running (Ready = true) May 16 01:05:21.868: INFO: The 
status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 16 01:05:25.924: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.217:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3368 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 01:05:25.924: INFO: >>> kubeConfig: /root/.kube/config I0516 01:05:25.958223 7 log.go:172] (0xc00373e580) (0xc001c79360) Create stream I0516 01:05:25.958248 7 log.go:172] (0xc00373e580) (0xc001c79360) Stream added, broadcasting: 1 I0516 01:05:25.959489 7 log.go:172] (0xc00373e580) Reply frame received for 1 I0516 01:05:25.959516 7 log.go:172] (0xc00373e580) (0xc001c79540) Create stream I0516 01:05:25.959527 7 log.go:172] (0xc00373e580) (0xc001c79540) Stream added, broadcasting: 3 I0516 01:05:25.960237 7 log.go:172] (0xc00373e580) Reply frame received for 3 I0516 01:05:25.960269 7 log.go:172] (0xc00373e580) (0xc0013b6f00) Create stream I0516 01:05:25.960280 7 log.go:172] (0xc00373e580) (0xc0013b6f00) Stream added, broadcasting: 5 I0516 01:05:25.960962 7 log.go:172] (0xc00373e580) Reply frame received for 5 I0516 01:05:26.028128 7 log.go:172] (0xc00373e580) Data frame received for 3 I0516 01:05:26.028209 7 log.go:172] (0xc001c79540) (3) Data frame handling I0516 01:05:26.028231 7 log.go:172] (0xc001c79540) (3) Data frame sent I0516 01:05:26.028310 7 log.go:172] (0xc00373e580) Data frame received for 5 I0516 01:05:26.028332 7 log.go:172] (0xc0013b6f00) (5) Data frame handling I0516 01:05:26.028364 7 log.go:172] (0xc00373e580) Data frame received for 3 I0516 01:05:26.028387 7 log.go:172] (0xc001c79540) (3) Data frame handling I0516 01:05:26.030192 7 log.go:172] (0xc00373e580) Data frame received for 1 I0516 01:05:26.030207 7 log.go:172] (0xc001c79360) (1) Data frame handling I0516 01:05:26.030226 7 log.go:172] (0xc001c79360) (1) Data frame sent I0516 01:05:26.030235 7 log.go:172] (0xc00373e580) (0xc001c79360) Stream removed, broadcasting: 1 I0516 01:05:26.030279 7 log.go:172] (0xc00373e580) Go away received I0516 01:05:26.030312 7 log.go:172] (0xc00373e580) (0xc001c79360) Stream removed, broadcasting: 1 I0516 01:05:26.030320 7 log.go:172] (0xc00373e580) (0xc001c79540) Stream removed, broadcasting: 3 I0516 01:05:26.030325 7 log.go:172] (0xc00373e580) (0xc0013b6f00) Stream removed, broadcasting: 5 May 16 01:05:26.030: INFO: Found all expected endpoints: [netserver-0] May 16 01:05:26.032: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.18:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3368 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 01:05:26.032: INFO: >>> kubeConfig: /root/.kube/config I0516 01:05:26.059742 7 log.go:172] (0xc00381a6e0) (0xc0013b7900) Create stream I0516 01:05:26.059758 7 log.go:172] (0xc00381a6e0) (0xc0013b7900) Stream added, broadcasting: 1 I0516 01:05:26.061312 7 log.go:172] (0xc00381a6e0) Reply frame received for 1 I0516 01:05:26.061335 7 log.go:172] (0xc00381a6e0) (0xc001c797c0) Create stream I0516 01:05:26.061347 7 log.go:172] (0xc00381a6e0) (0xc001c797c0) Stream added, broadcasting: 3 I0516 01:05:26.062333 7 log.go:172] (0xc00381a6e0) Reply frame received for 3 I0516 01:05:26.062369 7 log.go:172] (0xc00381a6e0) (0xc00219e280) Create stream I0516 01:05:26.062384 7 log.go:172] (0xc00381a6e0) (0xc00219e280) Stream added, 
broadcasting: 5 I0516 01:05:26.063096 7 log.go:172] (0xc00381a6e0) Reply frame received for 5 I0516 01:05:26.124851 7 log.go:172] (0xc00381a6e0) Data frame received for 3 I0516 01:05:26.124888 7 log.go:172] (0xc001c797c0) (3) Data frame handling I0516 01:05:26.124924 7 log.go:172] (0xc001c797c0) (3) Data frame sent I0516 01:05:26.124995 7 log.go:172] (0xc00381a6e0) Data frame received for 3 I0516 01:05:26.125029 7 log.go:172] (0xc001c797c0) (3) Data frame handling I0516 01:05:26.125795 7 log.go:172] (0xc00381a6e0) Data frame received for 5 I0516 01:05:26.125830 7 log.go:172] (0xc00219e280) (5) Data frame handling I0516 01:05:26.126827 7 log.go:172] (0xc00381a6e0) Data frame received for 1 I0516 01:05:26.126921 7 log.go:172] (0xc0013b7900) (1) Data frame handling I0516 01:05:26.127008 7 log.go:172] (0xc0013b7900) (1) Data frame sent I0516 01:05:26.127076 7 log.go:172] (0xc00381a6e0) (0xc0013b7900) Stream removed, broadcasting: 1 I0516 01:05:26.127114 7 log.go:172] (0xc00381a6e0) Go away received I0516 01:05:26.127232 7 log.go:172] (0xc00381a6e0) (0xc0013b7900) Stream removed, broadcasting: 1 I0516 01:05:26.127254 7 log.go:172] (0xc00381a6e0) (0xc001c797c0) Stream removed, broadcasting: 3 I0516 01:05:26.127264 7 log.go:172] (0xc00381a6e0) (0xc00219e280) Stream removed, broadcasting: 5 May 16 01:05:26.127: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:05:26.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3368" for this suite. • [SLOW TEST:24.419 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":255,"skipped":4204,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:05:26.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3802 [It] should perform rolling updates and roll backs of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 16 01:05:26.235: INFO: Found 0 stateful pods, waiting for 3 May 16 01:05:36.241: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 16 01:05:36.241: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 16 01:05:36.241: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 16 01:05:46.241: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 16 01:05:46.241: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 16 01:05:46.241: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 16 01:05:46.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3802 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 01:05:46.522: INFO: stderr: "I0516 01:05:46.391804 3403 log.go:172] (0xc000ba13f0) (0xc000392140) Create stream\nI0516 01:05:46.391852 3403 log.go:172] (0xc000ba13f0) (0xc000392140) Stream added, broadcasting: 1\nI0516 01:05:46.394503 3403 log.go:172] (0xc000ba13f0) Reply frame received for 1\nI0516 01:05:46.394544 3403 log.go:172] (0xc000ba13f0) (0xc0005486e0) Create stream\nI0516 01:05:46.394556 3403 log.go:172] (0xc000ba13f0) (0xc0005486e0) Stream added, broadcasting: 3\nI0516 01:05:46.395745 3403 log.go:172] (0xc000ba13f0) Reply frame received for 3\nI0516 01:05:46.395781 3403 log.go:172] (0xc000ba13f0) (0xc0005e23c0) Create stream\nI0516 01:05:46.395793 3403 log.go:172] (0xc000ba13f0) (0xc0005e23c0) Stream added, broadcasting: 5\nI0516 01:05:46.396822 3403 log.go:172] (0xc000ba13f0) Reply frame received for 5\nI0516 01:05:46.484396 3403 log.go:172] (0xc000ba13f0) Data frame received for 5\nI0516 01:05:46.484417 3403 log.go:172] (0xc0005e23c0) (5) Data frame handling\nI0516 01:05:46.484428 3403 log.go:172] (0xc0005e23c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 01:05:46.514805 3403 log.go:172] (0xc000ba13f0) Data frame received for 5\nI0516 01:05:46.514840 3403 log.go:172] (0xc0005e23c0) (5) Data frame handling\nI0516 01:05:46.514865 3403 log.go:172] (0xc000ba13f0) Data frame received for 3\nI0516 01:05:46.514876 3403 log.go:172] (0xc0005486e0) (3) Data frame handling\nI0516 01:05:46.514898 3403 log.go:172] (0xc0005486e0) (3) Data frame sent\nI0516 01:05:46.515141 3403 log.go:172] (0xc000ba13f0) Data frame received for 3\nI0516 01:05:46.515166 3403 log.go:172] (0xc0005486e0) (3) Data frame handling\nI0516 01:05:46.516701 3403 log.go:172] (0xc000ba13f0) Data frame received for 1\nI0516 01:05:46.516720 3403 log.go:172] (0xc000392140) (1) Data frame handling\nI0516 01:05:46.516739 3403 log.go:172] (0xc000392140) (1) Data frame sent\nI0516 01:05:46.516751 3403 log.go:172] (0xc000ba13f0) (0xc000392140) Stream removed, broadcasting: 1\nI0516 01:05:46.516768 3403 log.go:172] (0xc000ba13f0) Go away received\nI0516 01:05:46.517380 3403 log.go:172] (0xc000ba13f0) (0xc000392140) Stream removed, broadcasting: 1\nI0516 01:05:46.517401 3403 log.go:172] (0xc000ba13f0) (0xc0005486e0) Stream removed, broadcasting: 3\nI0516 01:05:46.517411 3403 log.go:172] (0xc000ba13f0) (0xc0005e23c0) Stream removed, broadcasting: 5\n" May 16 01:05:46.522: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 01:05:46.522: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 16 01:05:56.555: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 16 01:06:06.594: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3802 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 01:06:06.802: INFO: stderr: "I0516 01:06:06.712499 3423 log.go:172] (0xc0000e0d10) (0xc00059ef00) Create stream\nI0516 01:06:06.712581 3423 log.go:172] (0xc0000e0d10) (0xc00059ef00) Stream added, broadcasting: 1\nI0516 01:06:06.715840 3423 log.go:172] (0xc0000e0d10) Reply frame received for 1\nI0516 01:06:06.715875 3423 log.go:172] (0xc0000e0d10) (0xc000538500) Create stream\nI0516 01:06:06.715897 3423 log.go:172] (0xc0000e0d10) (0xc000538500) Stream added, broadcasting: 3\nI0516 01:06:06.716764 3423 log.go:172] (0xc0000e0d10) Reply frame received for 3\nI0516 01:06:06.716819 3423 log.go:172] (0xc0000e0d10) (0xc0004c01e0) Create stream\nI0516 01:06:06.716834 3423 log.go:172] (0xc0000e0d10) (0xc0004c01e0) Stream added, broadcasting: 5\nI0516 01:06:06.717762 3423 log.go:172] (0xc0000e0d10) Reply frame received for 5\nI0516 01:06:06.794223 3423 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0516 01:06:06.794266 3423 log.go:172] (0xc0004c01e0) (5) Data frame handling\nI0516 01:06:06.794363 3423 log.go:172] (0xc0004c01e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 01:06:06.794398 3423 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0516 01:06:06.794442 3423 log.go:172] (0xc0004c01e0) (5) Data frame handling\nI0516 01:06:06.794483 3423 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0516 01:06:06.794512 3423 log.go:172] (0xc000538500) (3) Data frame handling\nI0516 01:06:06.794537 3423 log.go:172] (0xc000538500) (3) Data frame sent\nI0516 01:06:06.794550 3423 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0516 01:06:06.794559 3423 log.go:172] (0xc000538500) (3) Data frame handling\nI0516 01:06:06.796330 3423 log.go:172] (0xc0000e0d10) Data frame received for 1\nI0516 01:06:06.796433 3423 log.go:172] (0xc00059ef00) (1) Data frame handling\nI0516 01:06:06.796562 3423 log.go:172] (0xc00059ef00) (1) Data frame sent\nI0516 01:06:06.796601 3423 log.go:172] (0xc0000e0d10) (0xc00059ef00) Stream removed, broadcasting: 1\nI0516 01:06:06.796642 3423 log.go:172] (0xc0000e0d10) Go away received\nI0516 01:06:06.797418 3423 log.go:172] (0xc0000e0d10) (0xc00059ef00) Stream removed, broadcasting: 1\nI0516 01:06:06.797445 3423 log.go:172] (0xc0000e0d10) (0xc000538500) Stream removed, broadcasting: 3\nI0516 01:06:06.797459 3423 log.go:172] (0xc0000e0d10) (0xc0004c01e0) Stream removed, broadcasting: 5\n" May 16 01:06:06.802: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 01:06:06.802: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 01:06:16.822: INFO: Waiting for StatefulSet statefulset-3802/ss2 to complete update May 16 01:06:16.822: INFO: Waiting for Pod statefulset-3802/ss2-0 to have 
revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 16 01:06:16.822: INFO: Waiting for Pod statefulset-3802/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 16 01:06:26.831: INFO: Waiting for StatefulSet statefulset-3802/ss2 to complete update May 16 01:06:26.831: INFO: Waiting for Pod statefulset-3802/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 16 01:06:36.831: INFO: Waiting for StatefulSet statefulset-3802/ss2 to complete update STEP: Rolling back to a previous revision May 16 01:06:46.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3802 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 01:06:47.126: INFO: stderr: "I0516 01:06:46.977476 3444 log.go:172] (0xc000b634a0) (0xc000a8e500) Create stream\nI0516 01:06:46.977532 3444 log.go:172] (0xc000b634a0) (0xc000a8e500) Stream added, broadcasting: 1\nI0516 01:06:46.982568 3444 log.go:172] (0xc000b634a0) Reply frame received for 1\nI0516 01:06:46.982606 3444 log.go:172] (0xc000b634a0) (0xc000618460) Create stream\nI0516 01:06:46.982616 3444 log.go:172] (0xc000b634a0) (0xc000618460) Stream added, broadcasting: 3\nI0516 01:06:46.983403 3444 log.go:172] (0xc000b634a0) Reply frame received for 3\nI0516 01:06:46.983427 3444 log.go:172] (0xc000b634a0) (0xc00059c460) Create stream\nI0516 01:06:46.983434 3444 log.go:172] (0xc000b634a0) (0xc00059c460) Stream added, broadcasting: 5\nI0516 01:06:46.984317 3444 log.go:172] (0xc000b634a0) Reply frame received for 5\nI0516 01:06:47.082408 3444 log.go:172] (0xc000b634a0) Data frame received for 5\nI0516 01:06:47.082432 3444 log.go:172] (0xc00059c460) (5) Data frame handling\nI0516 01:06:47.082443 3444 log.go:172] (0xc00059c460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 01:06:47.116487 3444 log.go:172] (0xc000b634a0) Data frame received for 3\nI0516 01:06:47.116525 3444 log.go:172] (0xc000618460) (3) Data frame handling\nI0516 01:06:47.116567 3444 log.go:172] (0xc000618460) (3) Data frame sent\nI0516 01:06:47.116780 3444 log.go:172] (0xc000b634a0) Data frame received for 3\nI0516 01:06:47.116805 3444 log.go:172] (0xc000618460) (3) Data frame handling\nI0516 01:06:47.116831 3444 log.go:172] (0xc000b634a0) Data frame received for 5\nI0516 01:06:47.116840 3444 log.go:172] (0xc00059c460) (5) Data frame handling\nI0516 01:06:47.119351 3444 log.go:172] (0xc000b634a0) Data frame received for 1\nI0516 01:06:47.119388 3444 log.go:172] (0xc000a8e500) (1) Data frame handling\nI0516 01:06:47.119422 3444 log.go:172] (0xc000a8e500) (1) Data frame sent\nI0516 01:06:47.119457 3444 log.go:172] (0xc000b634a0) (0xc000a8e500) Stream removed, broadcasting: 1\nI0516 01:06:47.119654 3444 log.go:172] (0xc000b634a0) Go away received\nI0516 01:06:47.119894 3444 log.go:172] (0xc000b634a0) (0xc000a8e500) Stream removed, broadcasting: 1\nI0516 01:06:47.119916 3444 log.go:172] (0xc000b634a0) (0xc000618460) Stream removed, broadcasting: 3\nI0516 01:06:47.119934 3444 log.go:172] (0xc000b634a0) (0xc00059c460) Stream removed, broadcasting: 5\n" May 16 01:06:47.126: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 01:06:47.126: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 01:06:57.157: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 16 
May 16 01:07:07.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3802 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 16 01:07:07.509: INFO: stderr: "I0516 01:07:07.370683 3463 log.go:172] (0xc00003b4a0) (0xc000980640) Create stream\nI0516 01:07:07.370727 3463 log.go:172] (0xc00003b4a0) (0xc000980640) Stream added, broadcasting: 1\nI0516 01:07:07.373975 3463 log.go:172] (0xc00003b4a0) Reply frame received for 1\nI0516 01:07:07.374020 3463 log.go:172] (0xc00003b4a0) (0xc0004163c0) Create stream\nI0516 01:07:07.374043 3463 log.go:172] (0xc00003b4a0) (0xc0004163c0) Stream added, broadcasting: 3\nI0516 01:07:07.374994 3463 log.go:172] (0xc00003b4a0) Reply frame received for 3\nI0516 01:07:07.375036 3463 log.go:172] (0xc00003b4a0) (0xc000970000) Create stream\nI0516 01:07:07.375050 3463 log.go:172] (0xc00003b4a0) (0xc000970000) Stream added, broadcasting: 5\nI0516 01:07:07.375629 3463 log.go:172] (0xc00003b4a0) Reply frame received for 5\nI0516 01:07:07.503689 3463 log.go:172] (0xc00003b4a0) Data frame received for 3\nI0516 01:07:07.503723 3463 log.go:172] (0xc0004163c0) (3) Data frame handling\nI0516 01:07:07.503738 3463 log.go:172] (0xc0004163c0) (3) Data frame sent\nI0516 01:07:07.503747 3463 log.go:172] (0xc00003b4a0) Data frame received for 3\nI0516 01:07:07.503756 3463 log.go:172] (0xc0004163c0) (3) Data frame handling\nI0516 01:07:07.503785 3463 log.go:172] (0xc00003b4a0) Data frame received for 5\nI0516 01:07:07.503798 3463 log.go:172] (0xc000970000) (5) Data frame handling\nI0516 01:07:07.503819 3463 log.go:172] (0xc000970000) (5) Data frame sent\nI0516 01:07:07.503837 3463 log.go:172] (0xc00003b4a0) Data frame received for 5\nI0516 01:07:07.503846 3463 log.go:172] (0xc000970000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 01:07:07.504695 3463 log.go:172] (0xc00003b4a0) Data frame received for 1\nI0516 01:07:07.504818 3463 log.go:172] (0xc000980640) (1) Data frame handling\nI0516 01:07:07.504855 3463 log.go:172] (0xc000980640) (1) Data frame sent\nI0516 01:07:07.504879 3463 log.go:172] (0xc00003b4a0) (0xc000980640) Stream removed, broadcasting: 1\nI0516 01:07:07.504905 3463 log.go:172] (0xc00003b4a0) Go away received\nI0516 01:07:07.505290 3463 log.go:172] (0xc00003b4a0) (0xc000980640) Stream removed, broadcasting: 1\nI0516 01:07:07.505307 3463 log.go:172] (0xc00003b4a0) (0xc0004163c0) Stream removed, broadcasting: 3\nI0516 01:07:07.505316 3463 log.go:172] (0xc00003b4a0) (0xc000970000) Stream removed, broadcasting: 5\n"
May 16 01:07:07.509: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 16 01:07:07.509: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 16 01:07:17.676: INFO: Waiting for StatefulSet statefulset-3802/ss2 to complete update
May 16 01:07:17.676: INFO: Waiting for Pod statefulset-3802/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 16 01:07:17.676: INFO: Waiting for Pod statefulset-3802/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 16 01:07:27.684: INFO: Waiting for StatefulSet statefulset-3802/ss2 to complete update
May 16 01:07:27.684: INFO: Waiting for Pod statefulset-3802/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
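These polls check that each pod's controller-revision-hash label converges on the StatefulSet's update revision. Outside the harness, the equivalent wait-and-verify is roughly:

  $ kubectl -n statefulset-3802 rollout status statefulset/ss2
  $ kubectl -n statefulset-3802 get statefulset ss2 -o jsonpath='{.status.currentRevision} {.status.updateRevision}'

rollout status blocks until every pod runs the update revision, at which point the two jsonpath values match.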
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
May 16 01:07:37.686: INFO: Deleting all statefulset in ns statefulset-3802
May 16 01:07:37.689: INFO: Scaling statefulset ss2 to 0
May 16 01:07:57.720: INFO: Waiting for statefulset status.replicas updated to 0
May 16 01:07:57.723: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 01:07:57.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3802" for this suite.
• [SLOW TEST:151.614 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":256,"skipped":4215,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 01:07:57.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 16 01:07:57.785: INFO: Creating deployment "webserver-deployment"
May 16 01:07:57.796: INFO: Waiting for observed generation 1
May 16 01:07:59.807: INFO: Waiting for all required pods to come up
May 16 01:07:59.813: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 16 01:08:09.846: INFO: Waiting for deployment "webserver-deployment" to complete
May 16 01:08:09.850: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 16 01:08:09.864: INFO: Updating deployment webserver-deployment
May 16 01:08:09.864: INFO: Waiting for observed generation 2
May 16 01:08:11.991: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 16 01:08:11.993: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 16 01:08:12.218: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 16 01:08:12.587: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 16 01:08:12.587: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
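The setup for proportional scaling is now in place: revision 1 (httpd:2.4.38-alpine) holds 8 ready pods and revision 2 (the unresolvable webserver:404 image) holds 5 that can never become ready. Scaling the deployment while this rollout is stuck exercises the controller's proportional distribution. A sketch of the scale call and the arithmetic implied by the logged numbers, using the maxSurge=3 / maxUnavailable=2 values from the spec dumped below (the exact rounding lives in the deployment controller):

  $ kubectl -n deployment-3512 scale deployment/webserver-deployment --replicas=30
  # ceiling while scaling: 30 desired + maxSurge 3 = 33 pods
  # currently 8 (old RS) + 5 (new RS) = 13, so 33 - 13 = 20 to add,
  # split in proportion to current ReplicaSet size:
  #   old RS: 8 + round(20 * 8/13) = 8 + 12 = 20
  #   new RS: 5 + round(20 * 5/13) = 5 + 8  = 13
  # matching the .spec.replicas = 20 and .spec.replicas = 13 checks below.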
"webserver-deployment" to have desired number of replicas May 16 01:08:12.596: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 16 01:08:12.596: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 16 01:08:12.604: INFO: Updating deployment webserver-deployment May 16 01:08:12.604: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 16 01:08:12.951: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 16 01:08:13.105: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 16 01:08:13.305: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3512 /apis/apps/v1/namespaces/deployment-3512/deployments/webserver-deployment 8b3395f6-56cf-463a-96fa-041450d02d0d 5027902 3 2020-05-16 01:07:57 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-16 01:08:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041099a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-16 01:08:10 +0000 UTC,LastTransitionTime:2020-05-16 01:07:57 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-16 01:08:12 +0000 UTC,LastTransitionTime:2020-05-16 01:08:12 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 16 01:08:13.434: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-3512 /apis/apps/v1/namespaces/deployment-3512/replicasets/webserver-deployment-6676bcd6d4 4543147b-0efd-475f-9595-fc12679aeb2c 5027944 3 2020-05-16 01:08:09 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 8b3395f6-56cf-463a-96fa-041450d02d0d 0xc0041f6067 0xc0041f6068}] [] [{kube-controller-manager Update apps/v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b3395f6-56cf-463a-96fa-041450d02d0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041f60f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 16 01:08:13.434: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 16 01:08:13.434: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-3512 /apis/apps/v1/namespaces/deployment-3512/replicasets/webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 5027940 3 2020-05-16 01:07:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 8b3395f6-56cf-463a-96fa-041450d02d0d 0xc0041f6157 0xc0041f6158}] [] [{kube-controller-manager Update apps/v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b3395f6-56cf-463a-96fa-041450d02d0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041f61e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 16 01:08:13.522: INFO: Pod "webserver-deployment-6676bcd6d4-2tbfm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2tbfm webserver-deployment-6676bcd6d4- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-6676bcd6d4-2tbfm f1071745-3cb7-49ba-9caf-e434199d4dde 5027843 0 2020-05-16 01:08:09 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 
4543147b-0efd-475f-9595-fc12679aeb2c 0xc004109e57 0xc004109e58}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4543147b-0efd-475f-9595-fc12679aeb2c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 01:08:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,Tolerat
ionSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-16 01:08:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.522: INFO: Pod "webserver-deployment-6676bcd6d4-4jtth" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4jtth webserver-deployment-6676bcd6d4- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-6676bcd6d4-4jtth e7854e93-b112-4137-99a2-5c8a47d6cf4b 5027858 0 2020-05-16 01:08:09 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 4543147b-0efd-475f-9595-fc12679aeb2c 0xc004248017 0xc004248018}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4543147b-0efd-475f-9595-fc12679aeb2c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 01:08:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-16 01:08:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.522: INFO: Pod "webserver-deployment-6676bcd6d4-7bzkv" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7bzkv webserver-deployment-6676bcd6d4- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-6676bcd6d4-7bzkv 58e6a754-18d8-4480-8a73-64ccb1e9099f 5027937 0 2020-05-16 01:08:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 4543147b-0efd-475f-9595-fc12679aeb2c 0xc0042481d7 0xc0042481d8}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4543147b-0efd-475f-9595-fc12679aeb2c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.522: INFO: Pod "webserver-deployment-6676bcd6d4-b8gpk" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-b8gpk webserver-deployment-6676bcd6d4- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-6676bcd6d4-b8gpk 558af2e7-6728-43bc-9bf0-1237d4da905d 5027942 0 2020-05-16 01:08:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 4543147b-0efd-475f-9595-fc12679aeb2c 0xc004248317 0xc004248318}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4543147b-0efd-475f-9595-fc12679aeb2c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.523: INFO: Pod "webserver-deployment-6676bcd6d4-cnj6c" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-cnj6c webserver-deployment-6676bcd6d4- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-6676bcd6d4-cnj6c 2e6ad112-b8dc-4f4a-b5bd-6ba84d61a99c 5027908 0 2020-05-16 01:08:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 4543147b-0efd-475f-9595-fc12679aeb2c 0xc004248457 0xc004248458}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4543147b-0efd-475f-9595-fc12679aeb2c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainer
s:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.523: INFO: Pod "webserver-deployment-6676bcd6d4-d5hdm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-d5hdm webserver-deployment-6676bcd6d4- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-6676bcd6d4-d5hdm f9a83b82-7f8c-419e-b9e7-190b1f4244fb 5027872 0 2020-05-16 01:08:10 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 4543147b-0efd-475f-9595-fc12679aeb2c 0xc004248597 0xc004248598}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4543147b-0efd-475f-9595-fc12679aeb2c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 01:08:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-16 01:08:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.523: INFO: Pod "webserver-deployment-6676bcd6d4-frv6w" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-frv6w webserver-deployment-6676bcd6d4- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-6676bcd6d4-frv6w 644a9c28-e62b-4569-ba8d-3c4210ba694a 5027870 0 2020-05-16 01:08:10 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 4543147b-0efd-475f-9595-fc12679aeb2c 0xc004248747 0xc004248748}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4543147b-0efd-475f-9595-fc12679aeb2c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 01:08:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-16 01:08:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.523: INFO: Pod "webserver-deployment-6676bcd6d4-kvfph" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-kvfph webserver-deployment-6676bcd6d4- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-6676bcd6d4-kvfph 1733dfd7-e4ad-4dbb-b240-a6d9459dd172 5027939 0 2020-05-16 01:08:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 4543147b-0efd-475f-9595-fc12679aeb2c 0xc0042488f7 0xc0042488f8}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4543147b-0efd-475f-9595-fc12679aeb2c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.523: INFO: Pod "webserver-deployment-6676bcd6d4-lqrw8" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-lqrw8 webserver-deployment-6676bcd6d4- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-6676bcd6d4-lqrw8 5ff34d3c-e19f-421d-a733-9f8d561a29f2 5027906 0 2020-05-16 01:08:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 4543147b-0efd-475f-9595-fc12679aeb2c 0xc004248a37 0xc004248a38}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4543147b-0efd-475f-9595-fc12679aeb2c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
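The kvfph and lqrw8 dumps above are pods from the new 6676bcd6d4 ReplicaSet, still Pending with PodScheduled as their only condition, which is why each is logged as "is not available". The classification that wording reflects can be sketched in a few lines of Go against the k8s.io/api types; this is a simplified stand-in, not the suite's actual helper, and it ignores the minReadySeconds grace the deployment machinery also enforces:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// isPodAvailable mirrors the "is available" / "is not available" wording
// in the dumps: the pod must be Running and its Ready condition True.
// (The deployment controller additionally requires the pod to have been
// ready for minReadySeconds; that refinement is omitted in this sketch.)
func isPodAvailable(pod *v1.Pod) bool {
	if pod.Status.Phase != v1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A freshly scheduled pod like the ones above: Pending, no Ready condition yet.
	pending := &v1.Pod{Status: v1.PodStatus{Phase: v1.PodPending}}
	fmt.Println(isPodAvailable(pending)) // false
}
```

Every Pending dump in this stretch fails both checks, so the "not available" verdicts that follow fall out directly.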
May 16 01:08:13.524: INFO: Pod "webserver-deployment-6676bcd6d4-m4vdd" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-m4vdd webserver-deployment-6676bcd6d4- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-6676bcd6d4-m4vdd 99157894-5fe2-4a36-9fbc-40ccf30badd1 5027932 0 2020-05-16 01:08:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 4543147b-0efd-475f-9595-fc12679aeb2c 0xc004248b77 0xc004248b78}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4543147b-0efd-475f-9595-fc12679aeb2c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 16 01:08:13.524: INFO: Pod "webserver-deployment-6676bcd6d4-mbnbj" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mbnbj webserver-deployment-6676bcd6d4- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-6676bcd6d4-mbnbj 541fa5f2-425c-4e5b-91d5-31fa90e6d85f 5027938 0 2020-05-16 01:08:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 4543147b-0efd-475f-9595-fc12679aeb2c 0xc004248cb7 0xc004248cb8}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4543147b-0efd-475f-9595-fc12679aeb2c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,
AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.524: INFO: Pod "webserver-deployment-6676bcd6d4-qdtsd" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qdtsd webserver-deployment-6676bcd6d4- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-6676bcd6d4-qdtsd 470d2512-ed95-48d4-94bc-0f5a88a320ae 5027848 0 2020-05-16 01:08:09 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 4543147b-0efd-475f-9595-fc12679aeb2c 0xc004248df7 0xc004248df8}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4543147b-0efd-475f-9595-fc12679aeb2c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 01:08:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-16 01:08:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
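The qdtsd entry that ends above differs from the earlier Pending dumps in one detail: its managedFields list carries two entries. kube-controller-manager created the pod and owns the metadata and spec field sets, while the kubelet later claimed the status fields it reports back (the {"f:status":...} blob after its FieldsV1 marker). A minimal client-go sketch that prints those per-manager entries; the kubeconfig path is an assumption and the snippet is purely illustrative, since the test namespace is destroyed once the spec finishes:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path: whatever kubeconfig the environment provides.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod, err := client.CoreV1().Pods("deployment-3512").Get(
		context.TODO(), "webserver-deployment-6676bcd6d4-qdtsd", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Each entry names a field manager, its operation, and a FieldsV1 set
	// like the {"f:metadata":...} / {"f:status":...} blobs in the dumps.
	for _, mf := range pod.ManagedFields {
		fmt.Printf("%s %s at %v\n", mf.Manager, mf.Operation, mf.Time)
	}
}
```

Run against the qdtsd pod while it still existed, this would print one kube-controller-manager Update and one kubelet Update, matching the dump above.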
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-16 01:08:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.525: INFO: Pod "webserver-deployment-84855cf797-4c9jv" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-4c9jv webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-4c9jv 6771aa0d-4848-4d82-aab4-d0c004e698d6 5027912 0 2020-05-16 01:08:13 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc004249167 0xc004249168}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,Wi
ndowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.525: INFO: Pod "webserver-deployment-84855cf797-5gqgr" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5gqgr webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-5gqgr 8d7ca881-0d22-4448-810e-9eaf0d404133 5027734 0 2020-05-16 01:07:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc0042492b7 0xc0042492b8}] [] [{kube-controller-manager Update v1 2020-05-16 01:07:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 01:08:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.229\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:07:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 
01:08:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:07:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.229,StartTime:2020-05-16 01:07:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 01:08:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e8c4575b5e66d0cdc8cf11b3c018df13cf06e765fd3f8c634696c7e02e9dd01b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.229,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
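The 5gqgr dump that ends above is the first "is available" verdict in this stretch: an old-ReplicaSet pod (pod-template-hash 84855cf797, image docker.io/library/httpd:2.4.38-alpine) that is Running with Ready=True, while the new-ReplicaSet pods (hash 6676bcd6d4, the unpullable webserver:404 image) never get past Pending. Since the Deployment controller stamps pod-template-hash on every pod it creates, grouping on that label is enough to reconstruct the rollout state. A self-contained sketch in the same spirit as the earlier snippet (illustrative, not the suite's own accounting):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tally records, for one pod-template-hash (one ReplicaSet generation),
// how many pods exist and how many are ready.
type tally struct{ total, ready int }

// rolloutSnapshot groups pods by the pod-template-hash label that the
// Deployment controller stamps on every pod it creates.
func rolloutSnapshot(pods []v1.Pod) map[string]tally {
	out := map[string]tally{}
	for _, p := range pods {
		t := out[p.Labels["pod-template-hash"]]
		t.total++
		for _, c := range p.Status.Conditions {
			if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
				t.ready++
				break
			}
		}
		out[p.Labels["pod-template-hash"]] = t
	}
	return out
}

func main() {
	// Two stand-in pods mirroring the dumps: one ready old-RS pod,
	// one new-RS pod with no Ready condition yet.
	pods := []v1.Pod{
		{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"pod-template-hash": "84855cf797"}},
			Status: v1.PodStatus{Conditions: []v1.PodCondition{{Type: v1.PodReady, Status: v1.ConditionTrue}}}},
		{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"pod-template-hash": "6676bcd6d4"}}},
	}
	fmt.Println(rolloutSnapshot(pods)) // map[6676bcd6d4:{1 0} 84855cf797:{1 1}]
}
```

Against the full set of pods dumped in this test, the 84855cf797 bucket would show several ready pods and the 6676bcd6d4 bucket none, which is exactly the mixed state a rollout to a bad image gets stuck in.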
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.228\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:07:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 
01:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:07:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.228,StartTime:2020-05-16 01:07:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 01:08:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e2baf3c83b20209a9788cecb168bfc4f77c0e9d1c5df7bb0f9f4f7e9d0c85326,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.525: INFO: Pod "webserver-deployment-84855cf797-6xk7m" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-6xk7m webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-6xk7m 4c9c149a-5f0a-45a2-9501-4b1008e2583a 5027915 0 2020-05-16 01:08:13 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc004249647 0xc004249648}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.525: INFO: Pod "webserver-deployment-84855cf797-8d2fn" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-8d2fn webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-8d2fn 5ea1f613-37bc-42a2-aedd-98d5dc22598e 5027936 0 2020-05-16 01:08:13 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc004249787 0xc004249788}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.526: INFO: Pod "webserver-deployment-84855cf797-97fzd" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-97fzd webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-97fzd 0f09aa6e-656e-43e2-8d3a-1fe80483a09e 5027686 0 2020-05-16 01:07:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc0042498b7 0xc0042498b8}] [] [{kube-controller-manager Update v1 2020-05-16 01:07:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 01:08:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.225\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},Startup
Probe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:07:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:07:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.225,StartTime:2020-05-16 01:07:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 01:08:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8e4d5e1be6b8281fbeaf1e621954dad21e6091870c1f267ae8f22f2f0803cb18,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.225,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.526: INFO: Pod "webserver-deployment-84855cf797-bnv5p" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-bnv5p webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-bnv5p e3c19d41-a1bb-4d6a-ab42-9f4a35586d62 5027931 0 2020-05-16 01:08:13 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc004249a67 0xc004249a68}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.526: INFO: Pod "webserver-deployment-84855cf797-bwx7p" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-bwx7p webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-bwx7p b91ccb79-495b-48af-aaa8-820ba3d8e9b2 5027894 0 2020-05-16 01:08:12 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc004249b97 0xc004249b98}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:defaul
t-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.526: INFO: Pod "webserver-deployment-84855cf797-cbmrh" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cbmrh webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-cbmrh ed6576d6-80f3-4911-ae73-a17555dde2c1 5027933 0 2020-05-16 01:08:13 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc004249cc7 0xc004249cc8}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:
nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.526: INFO: Pod "webserver-deployment-84855cf797-dxck2" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dxck2 webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-dxck2 4bbb1826-9343-4596-b0fd-7ab111e439f7 5027711 0 2020-05-16 01:07:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc004249df7 0xc004249df8}] [] [{kube-controller-manager Update v1 2020-05-16 01:07:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 01:08:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.227\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:07:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 
01:08:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:07:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.227,StartTime:2020-05-16 01:07:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 01:08:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://898a5e8a0045c96ba5467818b11ec1ddd60343bb989eaf14eaa063c1d493aff9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.527: INFO: Pod "webserver-deployment-84855cf797-f75g4" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-f75g4 webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-f75g4 1f9d14f4-6b25-46e5-b6dc-e783d2ced0a3 5027700 0 2020-05-16 01:07:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc004249fa7 0xc004249fa8}] [] [{kube-controller-manager Update v1 2020-05-16 01:07:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 01:08:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.226\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:07:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 
01:08:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:07:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.226,StartTime:2020-05-16 01:07:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 01:08:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ccddc24abf86399cfb782aceeabbcfe2cee56506f4a380df2710107ca990d376,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.226,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.527: INFO: Pod "webserver-deployment-84855cf797-hj4zh" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-hj4zh webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-hj4zh 897b6811-54c4-4a71-b4ce-643c77322863 5027909 0 2020-05-16 01:08:13 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc0042ac157 0xc0042ac158}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.527: INFO: Pod "webserver-deployment-84855cf797-k2ckc" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-k2ckc webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-k2ckc 543768f9-2767-4507-b5a4-3eb12e14d67d 5027728 0 2020-05-16 01:07:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc0042ac287 0xc0042ac288}] [] [{kube-controller-manager Update v1 2020-05-16 01:07:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 01:08:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.23\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:07:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 
01:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:07:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.23,StartTime:2020-05-16 01:07:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 01:08:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://02722d1c44332890c4df52cd2f01c5adcd91f89bde6d4fd70d0d74addd595808,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.527: INFO: Pod "webserver-deployment-84855cf797-kd575" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-kd575 webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-kd575 2872ba49-6c7e-4658-8673-b6ddde3282a7 5027934 0 2020-05-16 01:08:13 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc0042ac437 0xc0042ac438}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&Se
curityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.527: INFO: Pod "webserver-deployment-84855cf797-l4l6n" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-l4l6n webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-l4l6n 14306cc3-676b-48a4-b6ae-3a43b703138d 5027935 0 2020-05-16 01:08:13 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc0042ac567 0xc0042ac568}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.527: INFO: Pod "webserver-deployment-84855cf797-nr5k8" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-nr5k8 webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-nr5k8 49f321c1-d508-4a05-9a48-d93a9ba64c77 5027962 0 2020-05-16 01:08:12 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc0042ac697 0xc0042ac698}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alway
s,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-16 01:08:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.528: INFO: Pod "webserver-deployment-84855cf797-nt92s" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-nt92s webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-nt92s 4d1d9b01-ba73-4cc0-92a6-af282c9e6d0a 5027943 0 2020-05-16 01:08:12 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc0042ac827 0xc0042ac828}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeco
nds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-16 01:08:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.528: INFO: Pod "webserver-deployment-84855cf797-sgjgk" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-sgjgk webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-sgjgk 41a84d1e-698e-41e9-b5b1-cc02848152ce 5027735 0 2020-05-16 01:07:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc0042ac9b7 0xc0042ac9b8}] [] [{kube-controller-manager Update v1 2020-05-16 01:07:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 01:08:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.24\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:07:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 
01:08:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:07:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.24,StartTime:2020-05-16 01:07:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 01:08:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d4977ff831469257e7e51e63242a64d26830669176994abdbad2488f8ee003c8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.24,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.528: INFO: Pod "webserver-deployment-84855cf797-wv7lr" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wv7lr webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-wv7lr 04af1c23-ee05-45d1-b85b-895d994d1a40 5027921 0 2020-05-16 01:08:13 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc0042acb67 0xc0042acb68}] [] [{kube-controller-manager Update v1 2020-05-16 01:08:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&Se
curityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 01:08:13.528: INFO: Pod "webserver-deployment-84855cf797-x2dh4" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-x2dh4 webserver-deployment-84855cf797- deployment-3512 /api/v1/namespaces/deployment-3512/pods/webserver-deployment-84855cf797-x2dh4 ca85a07e-8cb9-4590-bf27-41b1c2832a7d 5027721 0 2020-05-16 01:07:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 47105a34-213d-49f3-9ee1-2c36cb0bbfc8 0xc0042acc97 0xc0042acc98}] [] [{kube-controller-manager Update v1 2020-05-16 01:07:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47105a34-213d-49f3-9ee1-2c36cb0bbfc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 01:08:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.22\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d7j82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d7j82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d7j82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:07:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 
01:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:07:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.22,StartTime:2020-05-16 01:07:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 01:08:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://29fa73552f3708502d66d516fab419e08fd399524755999de18566ee2dc55275,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.22,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:08:13.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3512" for this suite. • [SLOW TEST:15.952 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":257,"skipped":4246,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:08:13.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-fdfae0e7-1b56-4560-8f10-5b9c14b8e1f4 STEP: Creating a pod to test consume configMaps May 16 01:08:13.990: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-42eb7587-aeb4-48c2-b415-8de24d34cc52" in namespace "projected-7993" to be "Succeeded or Failed" May 16 01:08:14.014: INFO: Pod "pod-projected-configmaps-42eb7587-aeb4-48c2-b415-8de24d34cc52": Phase="Pending", Reason="", readiness=false. Elapsed: 23.156112ms May 16 01:08:16.031: INFO: Pod "pod-projected-configmaps-42eb7587-aeb4-48c2-b415-8de24d34cc52": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.040684163s May 16 01:08:18.146: INFO: Pod "pod-projected-configmaps-42eb7587-aeb4-48c2-b415-8de24d34cc52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155800325s May 16 01:08:20.658: INFO: Pod "pod-projected-configmaps-42eb7587-aeb4-48c2-b415-8de24d34cc52": Phase="Pending", Reason="", readiness=false. Elapsed: 6.667373162s May 16 01:08:23.182: INFO: Pod "pod-projected-configmaps-42eb7587-aeb4-48c2-b415-8de24d34cc52": Phase="Pending", Reason="", readiness=false. Elapsed: 9.19191371s May 16 01:08:25.663: INFO: Pod "pod-projected-configmaps-42eb7587-aeb4-48c2-b415-8de24d34cc52": Phase="Pending", Reason="", readiness=false. Elapsed: 11.672436577s May 16 01:08:28.517: INFO: Pod "pod-projected-configmaps-42eb7587-aeb4-48c2-b415-8de24d34cc52": Phase="Pending", Reason="", readiness=false. Elapsed: 14.526555745s May 16 01:08:30.619: INFO: Pod "pod-projected-configmaps-42eb7587-aeb4-48c2-b415-8de24d34cc52": Phase="Pending", Reason="", readiness=false. Elapsed: 16.628763887s May 16 01:08:32.949: INFO: Pod "pod-projected-configmaps-42eb7587-aeb4-48c2-b415-8de24d34cc52": Phase="Pending", Reason="", readiness=false. Elapsed: 18.958056034s May 16 01:08:35.009: INFO: Pod "pod-projected-configmaps-42eb7587-aeb4-48c2-b415-8de24d34cc52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.018357334s STEP: Saw pod success May 16 01:08:35.009: INFO: Pod "pod-projected-configmaps-42eb7587-aeb4-48c2-b415-8de24d34cc52" satisfied condition "Succeeded or Failed" May 16 01:08:35.073: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-42eb7587-aeb4-48c2-b415-8de24d34cc52 container projected-configmap-volume-test: STEP: delete the pod May 16 01:08:36.087: INFO: Waiting for pod pod-projected-configmaps-42eb7587-aeb4-48c2-b415-8de24d34cc52 to disappear May 16 01:08:36.128: INFO: Pod pod-projected-configmaps-42eb7587-aeb4-48c2-b415-8de24d34cc52 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:08:36.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7993" for this suite. 
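------------------------------
For reference while reading the case above: a minimal client-go sketch of the kind of pod it creates, a projected volume whose ConfigMap source remaps one key to a new path with an explicit per-item mode. The key/path names, mode, image, and mounttest arguments are illustrative assumptions, not lifted from the suite's source.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// mode returns *int32 because KeyToPath.Mode is a pointer in the API.
func mode(m int32) *int32 { return &m }

func projectedConfigMapPod(ns, cmName string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo", Namespace: ns},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                                Items: []corev1.KeyToPath{{
                                    Key:  "data-1",         // key in the ConfigMap
                                    Path: "path/to/data-2", // remapped file name (the "mappings")
                                    Mode: mode(0400),       // the per-item mode being verified
                                }},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:  "projected-configmap-volume-test",
                Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13", // image seen elsewhere in this run
                Args:  []string{"mounttest", "--file_content=/etc/projected-configmap-volume/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-configmap-volume",
                    MountPath: "/etc/projected-configmap-volume",
                }},
            }},
        },
    }
}

func main() { fmt.Println(projectedConfigMapPod("projected-7993", "example-configmap").Name) }
------------------------------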
• [SLOW TEST:22.918 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":258,"skipped":4250,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:08:36.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 16 01:08:38.008: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d30c55a0-b20b-48f1-87e8-faa5ee92cc0d" in namespace "downward-api-5899" to be "Succeeded or Failed" May 16 01:08:38.244: INFO: Pod "downwardapi-volume-d30c55a0-b20b-48f1-87e8-faa5ee92cc0d": Phase="Pending", Reason="", readiness=false. Elapsed: 235.285726ms May 16 01:08:40.255: INFO: Pod "downwardapi-volume-d30c55a0-b20b-48f1-87e8-faa5ee92cc0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246963548s May 16 01:08:42.380: INFO: Pod "downwardapi-volume-d30c55a0-b20b-48f1-87e8-faa5ee92cc0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.371452931s May 16 01:08:44.415: INFO: Pod "downwardapi-volume-d30c55a0-b20b-48f1-87e8-faa5ee92cc0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.406630762s May 16 01:08:46.494: INFO: Pod "downwardapi-volume-d30c55a0-b20b-48f1-87e8-faa5ee92cc0d": Phase="Running", Reason="", readiness=true. Elapsed: 8.485428148s May 16 01:08:48.579: INFO: Pod "downwardapi-volume-d30c55a0-b20b-48f1-87e8-faa5ee92cc0d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.570077246s STEP: Saw pod success May 16 01:08:48.579: INFO: Pod "downwardapi-volume-d30c55a0-b20b-48f1-87e8-faa5ee92cc0d" satisfied condition "Succeeded or Failed" May 16 01:08:48.591: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d30c55a0-b20b-48f1-87e8-faa5ee92cc0d container client-container: STEP: delete the pod May 16 01:08:48.902: INFO: Waiting for pod downwardapi-volume-d30c55a0-b20b-48f1-87e8-faa5ee92cc0d to disappear May 16 01:08:48.907: INFO: Pod downwardapi-volume-d30c55a0-b20b-48f1-87e8-faa5ee92cc0d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:08:48.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5899" for this suite. • [SLOW TEST:12.301 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":259,"skipped":4257,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:08:48.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 16 01:08:56.827: INFO: 10 pods remaining May 16 01:08:56.827: INFO: 10 pods have nil DeletionTimestamp May 16 01:08:56.827: INFO: May 16 01:08:57.923: INFO: 9 pods remaining May 16 01:08:57.923: INFO: 0 pods have nil DeletionTimestamp May 16 01:08:57.923: INFO: May 16 01:08:58.184: INFO: 0 pods remaining May 16 01:08:58.184: INFO: 0 pods have nil DeletionTimestamp May 16 01:08:58.184: INFO: May 16 01:08:59.536: INFO: 0 pods remaining May 16 01:08:59.536: INFO: 0 pods have nil DeletionTimestamp May 16 01:08:59.536: INFO: STEP: Gathering metrics W0516 01:09:00.300204 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 16 01:09:00.300: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:09:00.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6273" for this suite. • [SLOW TEST:11.615 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":260,"skipped":4261,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:09:00.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 16 01:09:02.274: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 16 01:09:02.619: INFO: Waiting for terminating namespaces to be deleted... 
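------------------------------
The garbage-collector case that just passed hinges on the DeleteOptions sent with the RC deletion. Here is a minimal client-go sketch of that call, assuming foreground propagation and an RC named "simpletest.rc" (the log does not print the name): with foreground propagation the RC keeps its deletionTimestamp and the foregroundDeletion finalizer until the collector has removed the dependent pods, which is the "pods remaining" countdown logged above.

package main

import (
    "context"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client from the same kubeconfig the suite points at.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        log.Fatal(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }

    // Foreground propagation: keep the owner around until its dependents
    // are gone, instead of orphaning them or returning immediately.
    policy := metav1.DeletePropagationForeground
    if err := client.CoreV1().ReplicationControllers("gc-6273").Delete(
        context.TODO(), "simpletest.rc", metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
        log.Fatal(err)
    }
}
------------------------------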
May 16 01:09:02.624: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 16 01:09:02.632: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) May 16 01:09:02.633: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 16 01:09:02.633: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) May 16 01:09:02.633: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 16 01:09:02.633: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 16 01:09:02.633: INFO: Container kindnet-cni ready: true, restart count 0 May 16 01:09:02.633: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 16 01:09:02.633: INFO: Container kube-proxy ready: true, restart count 0 May 16 01:09:02.633: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 16 01:09:02.637: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) May 16 01:09:02.637: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 16 01:09:02.637: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) May 16 01:09:02.637: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 16 01:09:02.637: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 16 01:09:02.637: INFO: Container kindnet-cni ready: true, restart count 0 May 16 01:09:02.637: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 16 01:09:02.637: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 May 16 01:09:03.339: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker May 16 01:09:03.340: INFO: Pod terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 requesting resource cpu=0m on Node latest-worker2 May 16 01:09:03.340: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker May 16 01:09:03.340: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 May 16 01:09:03.340: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker May 16 01:09:03.340: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 16 01:09:03.340: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker May 16 01:09:03.406: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires an unavailable amount of CPU.
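------------------------------
The "another pod" from the step above only needs a CPU request larger than whatever the filler pods left allocatable. Roughly, as a hedged sketch (the request quantity is an illustrative assumption; the pause image is taken from the events below):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overAskPod requests more CPU than any node still has free, so the
// scheduler can only answer with the FailedScheduling/"Insufficient cpu"
// events recorded below.
func overAskPod(ns string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "additional-pod", Namespace: ns},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "additional-pod",
                Image: "k8s.gcr.io/pause:3.2",
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        // Anything above the remaining headroom triggers the failure.
                        corev1.ResourceCPU: resource.MustParse("600m"),
                    },
                },
            }},
        },
    }
}

func main() { fmt.Println(overAskPod("sched-pred-3583").Name) }
------------------------------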
STEP: Considering event: Type = [Normal], Name = [filler-pod-3aed3caa-e7fa-42d7-82a6-5948adbd9cad.160f5ce7a3c16f40], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3583/filler-pod-3aed3caa-e7fa-42d7-82a6-5948adbd9cad to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-3aed3caa-e7fa-42d7-82a6-5948adbd9cad.160f5ce840ae63e3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-3aed3caa-e7fa-42d7-82a6-5948adbd9cad.160f5ce8a6206623], Reason = [Created], Message = [Created container filler-pod-3aed3caa-e7fa-42d7-82a6-5948adbd9cad] STEP: Considering event: Type = [Normal], Name = [filler-pod-3aed3caa-e7fa-42d7-82a6-5948adbd9cad.160f5ce8b91d2dd9], Reason = [Started], Message = [Started container filler-pod-3aed3caa-e7fa-42d7-82a6-5948adbd9cad] STEP: Considering event: Type = [Normal], Name = [filler-pod-698c27de-9995-4ada-bc0c-da7da4cc7382.160f5ce7a5a1eae5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3583/filler-pod-698c27de-9995-4ada-bc0c-da7da4cc7382 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-698c27de-9995-4ada-bc0c-da7da4cc7382.160f5ce8047a7957], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-698c27de-9995-4ada-bc0c-da7da4cc7382.160f5ce86161bfcb], Reason = [Created], Message = [Created container filler-pod-698c27de-9995-4ada-bc0c-da7da4cc7382] STEP: Considering event: Type = [Normal], Name = [filler-pod-698c27de-9995-4ada-bc0c-da7da4cc7382.160f5ce87d5695a4], Reason = [Started], Message = [Started container filler-pod-698c27de-9995-4ada-bc0c-da7da4cc7382] STEP: Considering event: Type = [Warning], Name = [additional-pod.160f5ce9102226b7], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.160f5ce91086c46f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:09:10.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3583" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.173 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":261,"skipped":4284,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:09:10.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 16 01:09:11.048: INFO: >>> kubeConfig: /root/.kube/config May 16 01:09:13.105: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:09:23.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6707" for this suite. 
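------------------------------
What the CRD-publishing case above registers is simply two structural CRDs that share a group and version while differing in kind; both kinds must then surface in the aggregated OpenAPI document. A sketch under assumed names (the suite actually generates random "e2e-test-crd-publish-openapi-..." names):

package main

import (
    "fmt"

    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// crd builds a minimal structural CRD in the group example.com, version v1.
func crd(kind, plural string) *apiextensionsv1.CustomResourceDefinition {
    return &apiextensionsv1.CustomResourceDefinition{
        ObjectMeta: metav1.ObjectMeta{Name: plural + ".example.com"},
        Spec: apiextensionsv1.CustomResourceDefinitionSpec{
            Group: "example.com",
            Scope: apiextensionsv1.NamespaceScoped,
            Names: apiextensionsv1.CustomResourceDefinitionNames{Kind: kind, Plural: plural},
            Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
                Name:    "v1",
                Served:  true,
                Storage: true,
                Schema: &apiextensionsv1.CustomResourceValidation{
                    OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
                },
            }},
        },
    }
}

func main() {
    // Same group and version, different kinds, as in the test above.
    for _, c := range []*apiextensionsv1.CustomResourceDefinition{crd("Foo", "foos"), crd("Bar", "bars")} {
        fmt.Println(c.Name, "kind:", c.Spec.Names.Kind)
    }
}
------------------------------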
• [SLOW TEST:13.137 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":262,"skipped":4288,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:09:23.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 16 01:09:24.074: INFO: Waiting up to 5m0s for pod "downwardapi-volume-607198ee-c468-4eb8-a19a-3a239c893ae4" in namespace "projected-1278" to be "Succeeded or Failed" May 16 01:09:24.140: INFO: Pod "downwardapi-volume-607198ee-c468-4eb8-a19a-3a239c893ae4": Phase="Pending", Reason="", readiness=false. Elapsed: 66.389165ms May 16 01:09:26.146: INFO: Pod "downwardapi-volume-607198ee-c468-4eb8-a19a-3a239c893ae4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071861402s May 16 01:09:28.150: INFO: Pod "downwardapi-volume-607198ee-c468-4eb8-a19a-3a239c893ae4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076223708s STEP: Saw pod success May 16 01:09:28.150: INFO: Pod "downwardapi-volume-607198ee-c468-4eb8-a19a-3a239c893ae4" satisfied condition "Succeeded or Failed" May 16 01:09:28.153: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-607198ee-c468-4eb8-a19a-3a239c893ae4 container client-container: STEP: delete the pod May 16 01:09:28.226: INFO: Waiting for pod downwardapi-volume-607198ee-c468-4eb8-a19a-3a239c893ae4 to disappear May 16 01:09:28.236: INFO: Pod downwardapi-volume-607198ee-c468-4eb8-a19a-3a239c893ae4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:09:28.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1278" for this suite. 
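------------------------------
The projected downwardAPI case that just finished wires a container resource into a file inside the pod. A hedged sketch of such a pod follows; the paths, image, and the 1250m quantity are illustrative, and the cpu-request case at completed:259 is the same shape with Resource set to "requests.cpu":

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPICPULimitPod(ns string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo", Namespace: ns},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "cpu_limit",
                                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "limits.cpu",
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:  "client-container",
                Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
                Args:  []string{"mounttest", "--file_content=/etc/podinfo/cpu_limit"},
                Resources: corev1.ResourceRequirements{
                    // Pin an explicit limit so the projected value is deterministic;
                    // without one, limits.cpu falls back to node allocatable.
                    Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1250m")},
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
}

func main() { fmt.Println(downwardAPICPULimitPod("projected-1278").Name) }
------------------------------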
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":263,"skipped":4301,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:09:28.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 16 01:09:32.890: INFO: Successfully updated pod "annotationupdate2f4f529f-c80e-4968-afc8-6dc05abb2ceb" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:09:34.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2501" for this suite. • [SLOW TEST:6.678 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":264,"skipped":4318,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:09:34.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 01:09:35.016: INFO: Creating deployment "test-recreate-deployment" May 16 01:09:35.020: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 16 01:09:35.084: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 16 01:09:37.092: INFO: Waiting deployment "test-recreate-deployment" to complete May 16 01:09:37.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188175, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188175, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188175, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188175, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 01:09:39.100: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188175, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188175, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188175, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188175, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 01:09:41.099: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 16 01:09:41.107: INFO: Updating deployment test-recreate-deployment May 16 01:09:41.107: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 16 01:09:42.278: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-730 /apis/apps/v1/namespaces/deployment-730/deployments/test-recreate-deployment 7db747b3-76ea-452c-96aa-f5e3ee28ed9a 5028895 2 2020-05-16 01:09:35 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-16 01:09:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-16 01:09:42 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e28e18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-16 01:09:41 +0000 UTC,LastTransitionTime:2020-05-16 01:09:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-16 01:09:42 +0000 UTC,LastTransitionTime:2020-05-16 01:09:35 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 16 01:09:42.288: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-730 /apis/apps/v1/namespaces/deployment-730/replicasets/test-recreate-deployment-d5667d9c7 c3740528-4b95-4bae-8980-9f6d3f0f37f6 5028892 1 2020-05-16 01:09:41 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 7db747b3-76ea-452c-96aa-f5e3ee28ed9a 0xc0031c74f0 0xc0031c74f1}] [] [{kube-controller-manager Update apps/v1 2020-05-16 01:09:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7db747b3-76ea-452c-96aa-f5e3ee28ed9a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031c7588 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 16 01:09:42.288: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 16 01:09:42.288: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-730 /apis/apps/v1/namespaces/deployment-730/replicasets/test-recreate-deployment-6d65b9f6d8 4780c93a-974c-4854-90b8-a3bf7b81efaa 5028879 2 2020-05-16 01:09:35 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 7db747b3-76ea-452c-96aa-f5e3ee28ed9a 0xc0031c73d7 0xc0031c73d8}] [] [{kube-controller-manager Update apps/v1 2020-05-16 01:09:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7db747b3-76ea-452c-96aa-f5e3ee28ed9a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031c7478 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 16 01:09:42.291: INFO: Pod "test-recreate-deployment-d5667d9c7-pnkq9" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-pnkq9 test-recreate-deployment-d5667d9c7- deployment-730 /api/v1/namespaces/deployment-730/pods/test-recreate-deployment-d5667d9c7-pnkq9 342527c9-f64a-48c2-8f57-4e57290bf16b 5028891 0 2020-05-16 01:09:41 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 c3740528-4b95-4bae-8980-9f6d3f0f37f6 0xc0031c7b70 0xc0031c7b71}] [] [{kube-controller-manager Update v1 2020-05-16 01:09:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3740528-4b95-4bae-8980-9f6d3f0f37f6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 01:09:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5xnqk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5xnqk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5xnqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:09:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:09:41 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:09:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 01:09:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-16 01:09:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:09:42.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-730" for this suite. • [SLOW TEST:7.377 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":265,"skipped":4335,"failed":0} [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:09:42.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-28e451b8-c149-4e86-9cf4-6ef3852774ef STEP: Creating configMap with name cm-test-opt-upd-b768795e-4337-4dd0-aedb-3af2ae9dae1f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-28e451b8-c149-4e86-9cf4-6ef3852774ef STEP: Updating configmap cm-test-opt-upd-b768795e-4337-4dd0-aedb-3af2ae9dae1f STEP: Creating configMap with name cm-test-opt-create-5c166c31-d398-4f00-8c73-2f6766d00336 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:09:52.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6428" for this suite. 
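------------------------------
The opt-del/opt-upd/opt-create dance in the ConfigMap case above turns on a single field: Optional on the ConfigMap volume source, which lets the pod start (and keep running) while a referenced ConfigMap is missing. A minimal sketch, with an illustrative name:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

// optionalConfigMapVolume marks the reference optional: the kubelet
// mounts an empty volume if the ConfigMap is absent and refreshes the
// files as the ConfigMap is created, updated, or deleted.
func optionalConfigMapVolume(cmName string) corev1.Volume {
    optional := true
    return corev1.Volume{
        Name: "cm-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                Optional:             &optional,
            },
        },
    }
}

func main() {
    v := optionalConfigMapVolume("cm-test-opt-create")
    fmt.Println(v.Name, "optional:", *v.VolumeSource.ConfigMap.Optional)
}
------------------------------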
• [SLOW TEST:10.317 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":266,"skipped":4335,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:09:52.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 16 01:09:52.670: INFO: namespace kubectl-1249 May 16 01:09:52.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1249' May 16 01:09:53.026: INFO: stderr: "" May 16 01:09:53.026: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 16 01:09:54.031: INFO: Selector matched 1 pod for map[app:agnhost] May 16 01:09:54.031: INFO: Found 0 / 1 May 16 01:09:55.381: INFO: Selector matched 1 pod for map[app:agnhost] May 16 01:09:55.381: INFO: Found 0 / 1 May 16 01:09:56.030: INFO: Selector matched 1 pod for map[app:agnhost] May 16 01:09:56.030: INFO: Found 0 / 1 May 16 01:09:57.038: INFO: Selector matched 1 pod for map[app:agnhost] May 16 01:09:57.038: INFO: Found 1 / 1 May 16 01:09:57.038: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 16 01:09:57.041: INFO: Selector matched 1 pod for map[app:agnhost] May 16 01:09:57.041: INFO: ForEach: Found 1 pod from the filter. Now looping through them. May 16 01:09:57.041: INFO: wait on agnhost-master startup in kubectl-1249 May 16 01:09:57.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-pp2c2 agnhost-master --namespace=kubectl-1249' May 16 01:09:57.167: INFO: stderr: "" May 16 01:09:57.167: INFO: stdout: "Paused\n" STEP: exposing RC May 16 01:09:57.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1249' May 16 01:09:57.328: INFO: stderr: "" May 16 01:09:57.328: INFO: stdout: "service/rm2 exposed\n" May 16 01:09:57.335: INFO: Service rm2 in namespace kubectl-1249 found.
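------------------------------
The `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` call above materialises approximately the following Service, with the RC's pod selector copied onto it (the app=agnhost label is inferred from the selector lines in this log):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func exposedService() *corev1.Service {
    return &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "rm2", Namespace: "kubectl-1249"},
        Spec: corev1.ServiceSpec{
            Selector: map[string]string{"app": "agnhost"},
            Ports: []corev1.ServicePort{{
                Port:       1234,                 // port the Service listens on
                TargetPort: intstr.FromInt(6379), // port on the backing pods
            }},
        },
    }
}

func main() {
    svc := exposedService()
    fmt.Println(svc.Name, svc.Spec.Ports[0].Port, "->", svc.Spec.Ports[0].TargetPort.IntValue())
}

The follow-up `expose service rm2 --name=rm3` step that comes next builds rm3 the same way, reusing rm2's selector.
------------------------------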
STEP: exposing service May 16 01:09:59.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1249' May 16 01:09:59.649: INFO: stderr: "" May 16 01:09:59.649: INFO: stdout: "service/rm3 exposed\n" May 16 01:09:59.955: INFO: Service rm3 in namespace kubectl-1249 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:10:01.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1249" for this suite. • [SLOW TEST:9.353 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":267,"skipped":4366,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:10:01.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 01:10:02.067: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 16 01:10:05.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8767 create -f -' May 16 01:10:09.150: INFO: stderr: "" May 16 01:10:09.150: INFO: stdout: "e2e-test-crd-publish-openapi-7158-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 16 01:10:09.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8767 delete e2e-test-crd-publish-openapi-7158-crds test-cr' May 16 01:10:09.251: INFO: stderr: "" May 16 01:10:09.251: INFO: stdout: "e2e-test-crd-publish-openapi-7158-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 16 01:10:09.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8767 apply -f -' May 16 01:10:09.489: INFO: stderr: "" May 16 01:10:09.489: INFO: stdout: "e2e-test-crd-publish-openapi-7158-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 16 01:10:09.489: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8767 delete e2e-test-crd-publish-openapi-7158-crds test-cr' May 16 01:10:09.594: INFO: stderr: "" May 16 01:10:09.594: INFO: stdout: "e2e-test-crd-publish-openapi-7158-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 16 01:10:09.594: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7158-crds' May 16 01:10:09.845: INFO: stderr: "" May 16 01:10:09.845: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7158-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:10:12.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8767" for this suite. 
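`kubectl explain` works here because the apiserver publishes the CRD's structural schema through OpenAPI; the DESCRIPTION and FIELDS text in the stdout above comes straight from that published schema. The behavior under test, unknown fields preserved inside an embedded object, corresponds to a schema fragment along these lines (a hedged excerpt, not the literal CRD from this run):

    # spec is declared as an embedded Kubernetes object that keeps
    # any properties the schema does not list
    openAPIV3Schema:
      type: object
      properties:
        spec:
          type: object
          x-kubernetes-embedded-resource: true
          x-kubernetes-preserve-unknown-fields: true

With that marker in place, client-side validation (`kubectl create`/`apply`) accepts arbitrary properties under `spec`, which is exactly the "allows request with any unknown properties" step.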
• [SLOW TEST:10.826 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":268,"skipped":4411,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:10:12.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 16 01:10:12.848: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' May 16 01:10:13.059: INFO: stderr: "" May 16 01:10:13.060: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:10:13.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4480" for this suite. 
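The api-versions spec reduces to a single pipeline. An equivalent hand check against the same server:

    kubectl api-versions | grep -x v1    # exit status 0 only if the core v1 group is served

`grep -x` insists on a whole-line match, so group/version pairs such as `apps/v1` do not satisfy it.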
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":269,"skipped":4428,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:10:13.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 16 01:10:13.183: INFO: Waiting up to 5m0s for pod "client-containers-53102840-5745-4404-893c-fc45ffabcc29" in namespace "containers-3449" to be "Succeeded or Failed" May 16 01:10:13.188: INFO: Pod "client-containers-53102840-5745-4404-893c-fc45ffabcc29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.520331ms May 16 01:10:15.218: INFO: Pod "client-containers-53102840-5745-4404-893c-fc45ffabcc29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035288696s May 16 01:10:17.222: INFO: Pod "client-containers-53102840-5745-4404-893c-fc45ffabcc29": Phase="Running", Reason="", readiness=true. Elapsed: 4.038630302s May 16 01:10:19.226: INFO: Pod "client-containers-53102840-5745-4404-893c-fc45ffabcc29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042653009s STEP: Saw pod success May 16 01:10:19.226: INFO: Pod "client-containers-53102840-5745-4404-893c-fc45ffabcc29" satisfied condition "Succeeded or Failed" May 16 01:10:19.228: INFO: Trying to get logs from node latest-worker2 pod client-containers-53102840-5745-4404-893c-fc45ffabcc29 container test-container: STEP: delete the pod May 16 01:10:19.246: INFO: Waiting for pod client-containers-53102840-5745-4404-893c-fc45ffabcc29 to disappear May 16 01:10:19.250: INFO: Pod client-containers-53102840-5745-4404-893c-fc45ffabcc29 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:10:19.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3449" for this suite. 
• [SLOW TEST:6.188 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":270,"skipped":4439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:10:19.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 16 01:10:19.404: INFO: Waiting up to 5m0s for pod "pod-50274e68-b7b0-4a73-9026-cee1b483bad8" in namespace "emptydir-3834" to be "Succeeded or Failed" May 16 01:10:19.424: INFO: Pod "pod-50274e68-b7b0-4a73-9026-cee1b483bad8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.470207ms May 16 01:10:21.429: INFO: Pod "pod-50274e68-b7b0-4a73-9026-cee1b483bad8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024907166s May 16 01:10:23.432: INFO: Pod "pod-50274e68-b7b0-4a73-9026-cee1b483bad8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028310254s STEP: Saw pod success May 16 01:10:23.432: INFO: Pod "pod-50274e68-b7b0-4a73-9026-cee1b483bad8" satisfied condition "Succeeded or Failed" May 16 01:10:23.435: INFO: Trying to get logs from node latest-worker pod pod-50274e68-b7b0-4a73-9026-cee1b483bad8 container test-container: STEP: delete the pod May 16 01:10:23.520: INFO: Waiting for pod pod-50274e68-b7b0-4a73-9026-cee1b483bad8 to disappear May 16 01:10:23.532: INFO: Pod pod-50274e68-b7b0-4a73-9026-cee1b483bad8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:10:23.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3834" for this suite. 
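Each entry in the emptyDir permission matrix, like "(non-root,0777,default)" here, names the user the test runs as, the file mode it writes, and the volume medium. A hedged equivalent of this case (the kubelet creates emptyDir directories world-writable, so a non-root user can write into one):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001              # the "non-root" part of the name
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "touch /mnt/f && chmod 0777 /mnt/f && stat -c '%a' /mnt/f"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt
      volumes:
      - name: scratch
        emptyDir: {}                 # "default" medium = node disk; medium: Memory is the tmpfs variant
    EOF

The pod should exit Succeeded with `777` in its log, mirroring the Succeeded-then-read-logs pattern in the record above.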
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":271,"skipped":4478,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:10:23.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 01:10:24.303: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 01:10:26.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188224, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188224, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188224, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188224, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 01:10:28.326: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188224, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188224, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188224, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188224, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 01:10:31.380: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:10:31.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5026" for this suite. STEP: Destroying namespace "webhook-5026-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.104 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":272,"skipped":4486,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:10:31.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-a7ef1f96-8054-4b8a-9cae-fca5b8494cc9 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:10:31.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4554" for this suite. 
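The empty-key Secret is rejected by apiserver validation, so this spec never creates a pod; note the absence of a "Saw pod success" step. The same failure can be reproduced by hand (illustrative manifest; a Secret data key must be a valid config key, and the empty string is not):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-emptykey-demo
    data:
      "": dmFsdWUtMQ==            # empty key: rejected at validation time
    EOF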
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":273,"skipped":4489,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:10:31.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 16 01:10:31.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' May 16 01:10:32.011: INFO: stderr: "" May 16 01:10:32.011: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:10:32.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4786" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":274,"skipped":4507,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:10:32.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-546427e7-f21a-45a1-a41c-afcf422bf275 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-546427e7-f21a-45a1-a41c-afcf422bf275 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:10:38.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5249" for this suite. 
• [SLOW TEST:6.416 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":275,"skipped":4538,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:10:38.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-78ac305b-43a4-4491-a293-1185991ecd8e in namespace container-probe-5980 May 16 01:10:42.548: INFO: Started pod busybox-78ac305b-43a4-4491-a293-1185991ecd8e in namespace container-probe-5980 STEP: checking the pod's current state and verifying that restartCount is present May 16 01:10:42.552: INFO: Initial restart count of pod busybox-78ac305b-43a4-4491-a293-1185991ecd8e is 0 May 16 01:11:28.719: INFO: Restart count of pod container-probe-5980/busybox-78ac305b-43a4-4491-a293-1185991ecd8e is now 1 (46.167738709s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:11:28.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5980" for this suite. 
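The restart at the 46s mark is the kubelet acting on a failing exec probe. The canonical shape of such a pod, sketched from the probe named in the spec title (image and timings illustrative):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec-demo
    spec:
      containers:
      - name: busybox
        image: docker.io/library/busybox:1.29
        # healthy while /tmp/health exists; after 30s the file is removed,
        # the probe starts failing, and the kubelet restarts the container
        command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
          failureThreshold: 3
    EOF

Watching `kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'` shows the same 0-to-1 transition the test asserts.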
• [SLOW TEST:50.355 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":276,"skipped":4567,"failed":0} [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:11:28.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 16 01:11:28.907: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14b5f7bc-5efd-4d16-b9d7-8eee03d05c13" in namespace "projected-1978" to be "Succeeded or Failed" May 16 01:11:29.147: INFO: Pod "downwardapi-volume-14b5f7bc-5efd-4d16-b9d7-8eee03d05c13": Phase="Pending", Reason="", readiness=false. Elapsed: 239.542127ms May 16 01:11:31.151: INFO: Pod "downwardapi-volume-14b5f7bc-5efd-4d16-b9d7-8eee03d05c13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243710386s May 16 01:11:33.155: INFO: Pod "downwardapi-volume-14b5f7bc-5efd-4d16-b9d7-8eee03d05c13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.247960912s STEP: Saw pod success May 16 01:11:33.155: INFO: Pod "downwardapi-volume-14b5f7bc-5efd-4d16-b9d7-8eee03d05c13" satisfied condition "Succeeded or Failed" May 16 01:11:33.159: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-14b5f7bc-5efd-4d16-b9d7-8eee03d05c13 container client-container: STEP: delete the pod May 16 01:11:33.342: INFO: Waiting for pod downwardapi-volume-14b5f7bc-5efd-4d16-b9d7-8eee03d05c13 to disappear May 16 01:11:33.414: INFO: Pod downwardapi-volume-14b5f7bc-5efd-4d16-b9d7-8eee03d05c13 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:11:33.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1978" for this suite. 
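The downward-API file in this spec is filled from the container's own CPU request via `resourceFieldRef`. Roughly what the test pod's volume looks like (names and values illustrative):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-cpu-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_request
                resourceFieldRef:
                  containerName: client-container
                  resource: requests.cpu
                  divisor: 1m      # report in millicores: the file reads 250
    EOF

Without an explicit divisor the value is rounded up to whole CPUs, so setting one makes the file contents predictable.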
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":277,"skipped":4567,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:11:33.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8112.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8112.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8112.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8112.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8112.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8112.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 16 01:11:39.625: INFO: DNS probes using dns-8112/dns-test-13506eac-60b6-44a6-bf22-addce55f0b79 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:11:39.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8112" for this suite. 
• [SLOW TEST:6.206 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":278,"skipped":4580,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:11:39.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 01:11:40.649: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 01:11:42.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188300, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188300, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188300, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188300, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 01:11:44.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188300, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188300, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188300, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725188300, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 01:11:47.728: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:11:48.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1834" for this suite. STEP: Destroying namespace "webhook-1834-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.765 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":279,"skipped":4586,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:11:48.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 16 01:11:48.541: INFO: Waiting up to 5m0s for pod "pod-d5995dd5-dddc-47d0-bf15-543750f44b23" in namespace "emptydir-2017" to be "Succeeded or Failed" May 16 01:11:48.596: INFO: Pod "pod-d5995dd5-dddc-47d0-bf15-543750f44b23": Phase="Pending", Reason="", readiness=false. Elapsed: 55.53833ms May 16 01:11:50.600: INFO: Pod "pod-d5995dd5-dddc-47d0-bf15-543750f44b23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059314563s May 16 01:11:52.605: INFO: Pod "pod-d5995dd5-dddc-47d0-bf15-543750f44b23": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.064372255s STEP: Saw pod success May 16 01:11:52.605: INFO: Pod "pod-d5995dd5-dddc-47d0-bf15-543750f44b23" satisfied condition "Succeeded or Failed" May 16 01:11:52.608: INFO: Trying to get logs from node latest-worker pod pod-d5995dd5-dddc-47d0-bf15-543750f44b23 container test-container: STEP: delete the pod May 16 01:11:52.699: INFO: Waiting for pod pod-d5995dd5-dddc-47d0-bf15-543750f44b23 to disappear May 16 01:11:52.706: INFO: Pod pod-d5995dd5-dddc-47d0-bf15-543750f44b23 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:11:52.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2017" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":280,"skipped":4589,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:11:52.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:12:05.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9868" for this suite. • [SLOW TEST:13.270 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":288,"completed":281,"skipped":4594,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:12:05.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 16 01:12:06.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-371' May 16 01:12:06.366: INFO: stderr: "" May 16 01:12:06.366: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 16 01:12:06.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-371' May 16 01:12:06.476: INFO: stderr: "" May 16 01:12:06.476: INFO: stdout: "update-demo-nautilus-csdh9 update-demo-nautilus-qw7xv " May 16 01:12:06.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-csdh9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-371' May 16 01:12:06.562: INFO: stderr: "" May 16 01:12:06.562: INFO: stdout: "" May 16 01:12:06.562: INFO: update-demo-nautilus-csdh9 is created but not running May 16 01:12:11.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-371' May 16 01:12:11.705: INFO: stderr: "" May 16 01:12:11.705: INFO: stdout: "update-demo-nautilus-csdh9 update-demo-nautilus-qw7xv " May 16 01:12:11.705: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-csdh9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-371' May 16 01:12:11.810: INFO: stderr: "" May 16 01:12:11.810: INFO: stdout: "true" May 16 01:12:11.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-csdh9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-371' May 16 01:12:11.903: INFO: stderr: "" May 16 01:12:11.903: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 01:12:11.903: INFO: validating pod update-demo-nautilus-csdh9 May 16 01:12:11.907: INFO: got data: { "image": "nautilus.jpg" } May 16 01:12:11.907: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 01:12:11.907: INFO: update-demo-nautilus-csdh9 is verified up and running May 16 01:12:11.907: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qw7xv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-371' May 16 01:12:12.008: INFO: stderr: "" May 16 01:12:12.008: INFO: stdout: "true" May 16 01:12:12.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qw7xv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-371' May 16 01:12:12.103: INFO: stderr: "" May 16 01:12:12.103: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 01:12:12.103: INFO: validating pod update-demo-nautilus-qw7xv May 16 01:12:12.108: INFO: got data: { "image": "nautilus.jpg" } May 16 01:12:12.108: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 01:12:12.108: INFO: update-demo-nautilus-qw7xv is verified up and running STEP: using delete to clean up resources May 16 01:12:12.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-371' May 16 01:12:12.217: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 16 01:12:12.217: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 16 01:12:12.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-371' May 16 01:12:12.332: INFO: stderr: "No resources found in kubectl-371 namespace.\n" May 16 01:12:12.332: INFO: stdout: "" May 16 01:12:12.332: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-371 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 16 01:12:12.440: INFO: stderr: "" May 16 01:12:12.440: INFO: stdout: "update-demo-nautilus-csdh9\nupdate-demo-nautilus-qw7xv\n" May 16 01:12:12.940: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-371' May 16 01:12:13.039: INFO: stderr: "No resources found in kubectl-371 namespace.\n" May 16 01:12:13.039: INFO: stdout: "" May 16 01:12:13.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-371 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 16 01:12:13.136: INFO: stderr: "" May 16 01:12:13.136: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:12:13.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-371" for this suite. 
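Two reusable pieces from this spec: the Go-template readiness check and the forced teardown. Both can be run by hand exactly as the log shows (namespace and pod name from this run):

    # prints "true" once the update-demo container reports state.running
    kubectl get pods update-demo-nautilus-csdh9 -n kubectl-371 -o template \
      --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'

    # immediate deletion; the "does not wait for confirmation" warning is expected
    kubectl delete rc update-demo-nautilus -n kubectl-371 --grace-period=0 --force

After the force-delete, the pods can still appear briefly, which is why the spec polls with a go-template that filters out anything carrying a deletionTimestamp until the list comes back empty.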
• [SLOW TEST:7.159 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":282,"skipped":4691,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:12:13.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 16 01:12:13.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2238' May 16 01:12:13.475: INFO: stderr: "" May 16 01:12:13.475: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 May 16 01:12:13.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2238' May 16 01:12:18.573: INFO: stderr: "" May 16 01:12:18.573: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:12:18.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2238" for this suite. 
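`kubectl run ... --restart=Never` creates a bare Pod rather than a managed workload, which is why the spec verifies that a pod (not a deployment) appeared. Equivalent by hand, with a follow-up check (namespace from this run):

    kubectl run e2e-test-httpd-pod --restart=Never \
      --image=docker.io/library/httpd:2.4.38-alpine -n kubectl-2238
    kubectl get pod e2e-test-httpd-pod -n kubectl-2238 \
      -o jsonpath='{.spec.restartPolicy}'     # prints: Never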
• [SLOW TEST:5.445 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":283,"skipped":4701,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:12:18.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 16 01:12:18.641: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 16 01:12:18.647: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 16 01:12:18.647: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 16 01:12:18.654: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 16 01:12:18.654: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 16 01:12:18.720: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 16 01:12:18.720: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 16 01:12:26.159: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:12:26.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-5857" for this suite. • [SLOW TEST:7.639 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":288,"completed":284,"skipped":4756,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 01:12:26.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 01:12:42.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7603" for this suite. • [SLOW TEST:16.409 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[sig-apps] ReplicaSet
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 01:12:42.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May 16 01:12:47.770: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 01:12:47.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4836" for this suite.
• [SLOW TEST:5.251 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":286,"skipped":4781,"failed":0}
SS
------------------------------
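Adoption and release hinge on label selectors and ownerReferences: a ReplicaSet whose selector matches an existing orphan pod takes ownership of it instead of creating a new replica, and relabeling the pod out of the selector makes the controller release it. A minimal sketch of that sequence, assuming a configured clientset; the pod name and 'name' label come from the log and the image from the suite's agnhost image, while the ReplicaSet name and structure are illustrative:

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func adoptThenRelease(ctx context.Context, client kubernetes.Interface, ns string) error {
	labels := map[string]string{"name": "pod-adoption-release"}

	// 1. An orphan pod that happens to match the selector the ReplicaSet
	//    will use.
	orphan := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release", Labels: labels},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "app",
			Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
		}}},
	}
	if _, err := client.CoreV1().Pods(ns).Create(ctx, orphan, metav1.CreateOptions{}); err != nil {
		return err
	}

	// 2. A ReplicaSet whose selector matches the orphan: rather than create
	//    a fresh replica, the controller adopts the pod by adding itself as
	//    the pod's controller ownerReference.
	one := int32(1)
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "adopter"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &one,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       orphan.Spec,
			},
		},
	}
	if _, err := client.AppsV1().ReplicaSets(ns).Create(ctx, rs, metav1.CreateOptions{}); err != nil {
		return err
	}

	// 3. Changing the label so the pod no longer matches the selector makes
	//    the controller release it (drop the ownerReference) and start a
	//    replacement replica.
	pod, err := client.CoreV1().Pods(ns).Get(ctx, "pod-adoption-release", metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.Labels["name"] = "released"
	if _, err := client.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		return err
	}
	fmt.Println("pod released from ReplicaSet ownership")
	return nil
}
------------------------------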
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 01:12:47.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 01:12:55.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3656" for this suite.
• [SLOW TEST:7.123 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":288,"completed":287,"skipped":4783,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 01:12:55.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 16 01:12:55.105: INFO: Waiting up to 5m0s for pod "pod-44d6d6ba-49b8-4182-a12c-597af31f93a4" in namespace "emptydir-3172" to be "Succeeded or Failed"
May 16 01:12:55.117: INFO: Pod "pod-44d6d6ba-49b8-4182-a12c-597af31f93a4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.2201ms
May 16 01:12:57.122: INFO: Pod "pod-44d6d6ba-49b8-4182-a12c-597af31f93a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016367764s
May 16 01:12:59.124: INFO: Pod "pod-44d6d6ba-49b8-4182-a12c-597af31f93a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019083205s
STEP: Saw pod success
May 16 01:12:59.124: INFO: Pod "pod-44d6d6ba-49b8-4182-a12c-597af31f93a4" satisfied condition "Succeeded or Failed"
May 16 01:12:59.127: INFO: Trying to get logs from node latest-worker2 pod pod-44d6d6ba-49b8-4182-a12c-597af31f93a4 container test-container: 
STEP: delete the pod
May 16 01:12:59.155: INFO: Waiting for pod pod-44d6d6ba-49b8-4182-a12c-597af31f93a4 to disappear
May 16 01:12:59.159: INFO: Pod pod-44d6d6ba-49b8-4182-a12c-597af31f93a4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 01:12:59.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3172" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":288,"skipped":4807,"failed":0}
May 16 01:12:59.170: INFO: Running AfterSuite actions on all nodes
May 16 01:12:59.170: INFO: Running AfterSuite actions on node 1
May 16 01:12:59.170: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0}

Ran 288 of 5095 Specs in 5671.666 seconds
SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped
PASS
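------------------------------
For reference, the tmpfs-backed emptyDir exercised by the final spec comes from setting the volume's medium to "Memory"; the pod runs to completion and the suite then waits for the "Succeeded or Failed" condition, as in the log above. A minimal sketch, assuming a configured clientset; the busybox image and the shell probe are illustrative stand-ins for the suite's mounttest container:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func tmpfsEmptyDirPod(ctx context.Context, client kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.31",
				// Show the volume is tmpfs and that a 0777 file mode sticks.
				Command: []string{"sh", "-c",
					"mount | grep /test-volume && touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" is what makes the emptyDir tmpfs-backed.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	created, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("created pod %s; poll its phase until Succeeded, as the log does\n", created.Name)
	return nil
}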