I0506 23:57:56.004603 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0506 23:57:56.004779 7 e2e.go:129] Starting e2e run "a2dd0c1c-d924-4644-9fa8-07db2b7bfd4f" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588809474 - Will randomize all specs
Will run 288 of 5095 specs
May 6 23:57:56.059: INFO: >>> kubeConfig: /root/.kube/config
May 6 23:57:56.062: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 6 23:57:56.082: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 6 23:57:56.118: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 6 23:57:56.118: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 6 23:57:56.118: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 6 23:57:56.128: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 6 23:57:56.128: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 6 23:57:56.128: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
May 6 23:57:56.128: INFO: kube-apiserver version: v1.18.2
May 6 23:57:56.129: INFO: >>> kubeConfig: /root/.kube/config
May 6 23:57:56.132: INFO: Cluster IP family: ipv4
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 23:57:56.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
May 6 23:57:56.540: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 6 23:57:57.151: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 6 23:57:59.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406277, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406277, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406277, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406277, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 6 23:58:02.804: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 6 23:58:02.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1101-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 23:58:04.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1645" for this suite.
STEP: Destroying namespace "webhook-1645-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.743 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":1,"skipped":0,"failed":0}
SS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 23:58:04.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 6 23:58:04.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-6566
I0506 23:58:04.962843 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6566, replica count: 1
I0506 23:58:06.013533 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0506 23:58:07.013733 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0506 23:58:08.014018 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0506 23:58:09.014213 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0506 23:58:10.014406 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 6 23:58:10.256: INFO: Created: latency-svc-j8gmj
May 6 23:58:10.264: INFO: Got endpoints: latency-svc-j8gmj [149.40916ms]
May 6 23:58:10.454: INFO: Created: latency-svc-7k6bj
May 6 23:58:10.458: INFO: Got endpoints: latency-svc-7k6bj [194.609965ms]
May 6 23:58:10.537: INFO: Created: latency-svc-gsnvx
May 6 23:58:10.598: INFO: Got endpoints: latency-svc-gsnvx [333.915416ms]
May 6 23:58:10.655: INFO: Created: latency-svc-f9sng
May 6 23:58:10.675: INFO: Got endpoints: latency-svc-f9sng [411.317562ms]
May 6 23:58:10.807: INFO: Created: latency-svc-dp4nm
May 6 23:58:10.810: INFO: Got endpoints: latency-svc-dp4nm [546.23366ms] May 6
23:58:10.891: INFO: Created: latency-svc-r9cjk May 6 23:58:10.999: INFO: Got endpoints: latency-svc-r9cjk [734.995894ms] May 6 23:58:11.562: INFO: Created: latency-svc-5jjlc May 6 23:58:11.590: INFO: Got endpoints: latency-svc-5jjlc [1.325551772s] May 6 23:58:11.808: INFO: Created: latency-svc-945z2 May 6 23:58:11.845: INFO: Got endpoints: latency-svc-945z2 [1.581531463s] May 6 23:58:12.034: INFO: Created: latency-svc-lk9np May 6 23:58:12.060: INFO: Got endpoints: latency-svc-lk9np [1.795948975s] May 6 23:58:12.729: INFO: Created: latency-svc-vw472 May 6 23:58:12.792: INFO: Got endpoints: latency-svc-vw472 [2.528249327s] May 6 23:58:12.823: INFO: Created: latency-svc-qpc9r May 6 23:58:12.880: INFO: Got endpoints: latency-svc-qpc9r [2.615850644s] May 6 23:58:12.941: INFO: Created: latency-svc-6pgr6 May 6 23:58:12.961: INFO: Got endpoints: latency-svc-6pgr6 [2.696915787s] May 6 23:58:13.100: INFO: Created: latency-svc-bkt5b May 6 23:58:13.104: INFO: Got endpoints: latency-svc-bkt5b [2.840306271s] May 6 23:58:13.328: INFO: Created: latency-svc-k6f87 May 6 23:58:13.343: INFO: Got endpoints: latency-svc-k6f87 [3.079446575s] May 6 23:58:13.381: INFO: Created: latency-svc-6zlt2 May 6 23:58:13.416: INFO: Got endpoints: latency-svc-6zlt2 [3.151747389s] May 6 23:58:13.501: INFO: Created: latency-svc-6t2n6 May 6 23:58:13.523: INFO: Got endpoints: latency-svc-6t2n6 [3.259511005s] May 6 23:58:13.571: INFO: Created: latency-svc-wxrcb May 6 23:58:13.578: INFO: Got endpoints: latency-svc-wxrcb [3.119495611s] May 6 23:58:13.651: INFO: Created: latency-svc-kj44j May 6 23:58:13.680: INFO: Got endpoints: latency-svc-kj44j [3.082638686s] May 6 23:58:13.745: INFO: Created: latency-svc-9zdpl May 6 23:58:13.838: INFO: Got endpoints: latency-svc-9zdpl [3.162938222s] May 6 23:58:13.840: INFO: Created: latency-svc-2cqp5 May 6 23:58:13.872: INFO: Got endpoints: latency-svc-2cqp5 [3.062120561s] May 6 23:58:13.914: INFO: Created: latency-svc-gzct5 May 6 23:58:13.933: INFO: Got endpoints: 
latency-svc-gzct5 [2.933988688s] May 6 23:58:14.022: INFO: Created: latency-svc-4c2kl May 6 23:58:14.051: INFO: Got endpoints: latency-svc-4c2kl [2.46126737s] May 6 23:58:14.088: INFO: Created: latency-svc-zmgsm May 6 23:58:14.150: INFO: Got endpoints: latency-svc-zmgsm [2.304306324s] May 6 23:58:14.166: INFO: Created: latency-svc-7zb2d May 6 23:58:14.204: INFO: Got endpoints: latency-svc-7zb2d [2.143938298s] May 6 23:58:14.244: INFO: Created: latency-svc-whw4s May 6 23:58:14.310: INFO: Got endpoints: latency-svc-whw4s [1.517482394s] May 6 23:58:14.369: INFO: Created: latency-svc-qbvkq May 6 23:58:14.401: INFO: Got endpoints: latency-svc-qbvkq [1.521716943s] May 6 23:58:14.486: INFO: Created: latency-svc-2xglz May 6 23:58:14.497: INFO: Got endpoints: latency-svc-2xglz [1.536680954s] May 6 23:58:14.561: INFO: Created: latency-svc-876n5 May 6 23:58:14.575: INFO: Got endpoints: latency-svc-876n5 [1.470570882s] May 6 23:58:14.627: INFO: Created: latency-svc-c25m7 May 6 23:58:14.659: INFO: Got endpoints: latency-svc-c25m7 [1.315222194s] May 6 23:58:14.731: INFO: Created: latency-svc-lxrm5 May 6 23:58:14.807: INFO: Got endpoints: latency-svc-lxrm5 [1.391372606s] May 6 23:58:14.810: INFO: Created: latency-svc-2crsr May 6 23:58:14.840: INFO: Got endpoints: latency-svc-2crsr [1.316341256s] May 6 23:58:14.886: INFO: Created: latency-svc-mfmsv May 6 23:58:14.945: INFO: Got endpoints: latency-svc-mfmsv [1.367542606s] May 6 23:58:15.000: INFO: Created: latency-svc-j7hkl May 6 23:58:15.015: INFO: Got endpoints: latency-svc-j7hkl [1.334314168s] May 6 23:58:15.136: INFO: Created: latency-svc-zk2mz May 6 23:58:15.179: INFO: Got endpoints: latency-svc-zk2mz [1.34118051s] May 6 23:58:15.182: INFO: Created: latency-svc-pwmz9 May 6 23:58:15.203: INFO: Got endpoints: latency-svc-pwmz9 [1.330615382s] May 6 23:58:15.234: INFO: Created: latency-svc-zw85j May 6 23:58:15.319: INFO: Got endpoints: latency-svc-zw85j [1.385773392s] May 6 23:58:15.329: INFO: Created: latency-svc-9q569 May 6 
23:58:15.351: INFO: Got endpoints: latency-svc-9q569 [1.299838752s] May 6 23:58:15.384: INFO: Created: latency-svc-wbv2w May 6 23:58:15.405: INFO: Got endpoints: latency-svc-wbv2w [1.255624613s] May 6 23:58:15.525: INFO: Created: latency-svc-tgs9k May 6 23:58:15.539: INFO: Got endpoints: latency-svc-tgs9k [1.334999309s] May 6 23:58:15.589: INFO: Created: latency-svc-hc2mb May 6 23:58:15.604: INFO: Got endpoints: latency-svc-hc2mb [1.294087719s] May 6 23:58:15.683: INFO: Created: latency-svc-5jvrg May 6 23:58:15.708: INFO: Got endpoints: latency-svc-5jvrg [1.306685083s] May 6 23:58:15.744: INFO: Created: latency-svc-564wb May 6 23:58:15.760: INFO: Got endpoints: latency-svc-564wb [1.262663164s] May 6 23:58:15.849: INFO: Created: latency-svc-dtg62 May 6 23:58:15.888: INFO: Got endpoints: latency-svc-dtg62 [1.312635771s] May 6 23:58:15.894: INFO: Created: latency-svc-ljqzl May 6 23:58:15.898: INFO: Got endpoints: latency-svc-ljqzl [1.239527825s] May 6 23:58:15.947: INFO: Created: latency-svc-tqb5r May 6 23:58:16.016: INFO: Got endpoints: latency-svc-tqb5r [1.209155195s] May 6 23:58:16.022: INFO: Created: latency-svc-vs8v8 May 6 23:58:16.031: INFO: Got endpoints: latency-svc-vs8v8 [1.1910467s] May 6 23:58:16.085: INFO: Created: latency-svc-8l2t5 May 6 23:58:16.109: INFO: Got endpoints: latency-svc-8l2t5 [1.163958041s] May 6 23:58:16.208: INFO: Created: latency-svc-lskg9 May 6 23:58:16.226: INFO: Got endpoints: latency-svc-lskg9 [1.21128614s] May 6 23:58:16.284: INFO: Created: latency-svc-tznwd May 6 23:58:16.442: INFO: Got endpoints: latency-svc-tznwd [1.262888946s] May 6 23:58:16.444: INFO: Created: latency-svc-47mzc May 6 23:58:16.487: INFO: Got endpoints: latency-svc-47mzc [1.283696314s] May 6 23:58:16.523: INFO: Created: latency-svc-wd9qq May 6 23:58:16.536: INFO: Got endpoints: latency-svc-wd9qq [1.217506283s] May 6 23:58:16.615: INFO: Created: latency-svc-9mxxj May 6 23:58:16.651: INFO: Got endpoints: latency-svc-9mxxj [1.299628758s] May 6 23:58:16.651: INFO: 
Created: latency-svc-h7967 May 6 23:58:16.669: INFO: Got endpoints: latency-svc-h7967 [1.26301903s] May 6 23:58:16.697: INFO: Created: latency-svc-dg6sj May 6 23:58:16.783: INFO: Got endpoints: latency-svc-dg6sj [1.244094874s] May 6 23:58:16.817: INFO: Created: latency-svc-qtf7b May 6 23:58:16.838: INFO: Got endpoints: latency-svc-qtf7b [1.233626439s] May 6 23:58:16.934: INFO: Created: latency-svc-6cb2s May 6 23:58:16.939: INFO: Got endpoints: latency-svc-6cb2s [1.230969596s] May 6 23:58:16.976: INFO: Created: latency-svc-ll5tr May 6 23:58:17.005: INFO: Got endpoints: latency-svc-ll5tr [1.245026108s] May 6 23:58:17.118: INFO: Created: latency-svc-cxqw8 May 6 23:58:17.160: INFO: Got endpoints: latency-svc-cxqw8 [1.272030549s] May 6 23:58:17.161: INFO: Created: latency-svc-q2dlg May 6 23:58:17.201: INFO: Got endpoints: latency-svc-q2dlg [1.302953446s] May 6 23:58:17.292: INFO: Created: latency-svc-hfnqs May 6 23:58:17.323: INFO: Got endpoints: latency-svc-hfnqs [1.306444269s] May 6 23:58:17.325: INFO: Created: latency-svc-mmkt7 May 6 23:58:17.351: INFO: Got endpoints: latency-svc-mmkt7 [1.320483982s] May 6 23:58:17.388: INFO: Created: latency-svc-4nbgp May 6 23:58:17.480: INFO: Got endpoints: latency-svc-4nbgp [1.370709161s] May 6 23:58:17.482: INFO: Created: latency-svc-qzkxh May 6 23:58:17.514: INFO: Got endpoints: latency-svc-qzkxh [1.288357372s] May 6 23:58:17.579: INFO: Created: latency-svc-rq6hv May 6 23:58:17.682: INFO: Got endpoints: latency-svc-rq6hv [1.239316309s] May 6 23:58:17.686: INFO: Created: latency-svc-8pczz May 6 23:58:17.691: INFO: Got endpoints: latency-svc-8pczz [1.204024023s] May 6 23:58:17.737: INFO: Created: latency-svc-5z5rn May 6 23:58:17.769: INFO: Got endpoints: latency-svc-5z5rn [1.233236978s] May 6 23:58:17.862: INFO: Created: latency-svc-vv28t May 6 23:58:17.911: INFO: Got endpoints: latency-svc-vv28t [1.260128398s] May 6 23:58:17.913: INFO: Created: latency-svc-p27mt May 6 23:58:18.082: INFO: Got endpoints: latency-svc-p27mt 
[1.413792755s] May 6 23:58:18.092: INFO: Created: latency-svc-9dvq2 May 6 23:58:18.106: INFO: Got endpoints: latency-svc-9dvq2 [1.323203559s] May 6 23:58:18.150: INFO: Created: latency-svc-854vf May 6 23:58:18.173: INFO: Got endpoints: latency-svc-854vf [1.335465306s] May 6 23:58:18.238: INFO: Created: latency-svc-6fzpc May 6 23:58:18.269: INFO: Created: latency-svc-k2c9h May 6 23:58:18.269: INFO: Got endpoints: latency-svc-6fzpc [1.329777413s] May 6 23:58:18.299: INFO: Got endpoints: latency-svc-k2c9h [1.293774741s] May 6 23:58:18.330: INFO: Created: latency-svc-rr5fx May 6 23:58:18.400: INFO: Got endpoints: latency-svc-rr5fx [1.239753054s] May 6 23:58:18.402: INFO: Created: latency-svc-fbmb9 May 6 23:58:18.431: INFO: Got endpoints: latency-svc-fbmb9 [1.230129149s] May 6 23:58:18.485: INFO: Created: latency-svc-vk4bq May 6 23:58:18.562: INFO: Got endpoints: latency-svc-vk4bq [1.238596931s] May 6 23:58:18.564: INFO: Created: latency-svc-4cbmf May 6 23:58:18.576: INFO: Got endpoints: latency-svc-4cbmf [1.224071791s] May 6 23:58:18.638: INFO: Created: latency-svc-z6wwc May 6 23:58:18.654: INFO: Got endpoints: latency-svc-z6wwc [1.173809368s] May 6 23:58:18.753: INFO: Created: latency-svc-vx85p May 6 23:58:18.786: INFO: Created: latency-svc-jk5v8 May 6 23:58:18.786: INFO: Got endpoints: latency-svc-vx85p [1.271900944s] May 6 23:58:18.835: INFO: Got endpoints: latency-svc-jk5v8 [1.153767712s] May 6 23:58:18.915: INFO: Created: latency-svc-g6sx5 May 6 23:58:18.925: INFO: Got endpoints: latency-svc-g6sx5 [1.233887316s] May 6 23:58:18.959: INFO: Created: latency-svc-hxb59 May 6 23:58:18.985: INFO: Got endpoints: latency-svc-hxb59 [1.215876396s] May 6 23:58:19.016: INFO: Created: latency-svc-j94ck May 6 23:58:19.107: INFO: Got endpoints: latency-svc-j94ck [1.195862841s] May 6 23:58:19.110: INFO: Created: latency-svc-86s8d May 6 23:58:19.124: INFO: Got endpoints: latency-svc-86s8d [1.041663265s] May 6 23:58:19.159: INFO: Created: latency-svc-cfz8p May 6 23:58:19.184: INFO: 
Got endpoints: latency-svc-cfz8p [1.077514696s] May 6 23:58:19.280: INFO: Created: latency-svc-vk655 May 6 23:58:19.284: INFO: Got endpoints: latency-svc-vk655 [1.11084983s] May 6 23:58:19.354: INFO: Created: latency-svc-rj4q4 May 6 23:58:19.370: INFO: Got endpoints: latency-svc-rj4q4 [1.101122993s] May 6 23:58:19.422: INFO: Created: latency-svc-spnxx May 6 23:58:19.430: INFO: Got endpoints: latency-svc-spnxx [1.13143907s] May 6 23:58:19.457: INFO: Created: latency-svc-tdx5q May 6 23:58:19.472: INFO: Got endpoints: latency-svc-tdx5q [1.072659566s] May 6 23:58:19.580: INFO: Created: latency-svc-64b85 May 6 23:58:19.584: INFO: Got endpoints: latency-svc-64b85 [1.152012729s] May 6 23:58:19.650: INFO: Created: latency-svc-l2zmg May 6 23:58:19.665: INFO: Got endpoints: latency-svc-l2zmg [1.103611525s] May 6 23:58:19.753: INFO: Created: latency-svc-jgg9w May 6 23:58:19.756: INFO: Got endpoints: latency-svc-jgg9w [1.180317483s] May 6 23:58:19.795: INFO: Created: latency-svc-7w6pf May 6 23:58:19.810: INFO: Got endpoints: latency-svc-7w6pf [1.155464827s] May 6 23:58:19.831: INFO: Created: latency-svc-j8459 May 6 23:58:19.846: INFO: Got endpoints: latency-svc-j8459 [1.059154473s] May 6 23:58:19.938: INFO: Created: latency-svc-l2r9j May 6 23:58:19.967: INFO: Got endpoints: latency-svc-l2r9j [1.131075832s] May 6 23:58:19.998: INFO: Created: latency-svc-4t5wt May 6 23:58:20.095: INFO: Got endpoints: latency-svc-4t5wt [1.169692043s] May 6 23:58:20.124: INFO: Created: latency-svc-hglmk May 6 23:58:20.158: INFO: Got endpoints: latency-svc-hglmk [1.172936908s] May 6 23:58:20.268: INFO: Created: latency-svc-5lm5d May 6 23:58:20.279: INFO: Got endpoints: latency-svc-5lm5d [1.171866318s] May 6 23:58:20.328: INFO: Created: latency-svc-486d6 May 6 23:58:20.344: INFO: Got endpoints: latency-svc-486d6 [1.220367932s] May 6 23:58:20.418: INFO: Created: latency-svc-f5btl May 6 23:58:20.421: INFO: Got endpoints: latency-svc-f5btl [1.237385628s] May 6 23:58:20.496: INFO: Created: 
latency-svc-bbqbc May 6 23:58:20.574: INFO: Got endpoints: latency-svc-bbqbc [1.289480261s] May 6 23:58:20.587: INFO: Created: latency-svc-7rl7w May 6 23:58:20.593: INFO: Got endpoints: latency-svc-7rl7w [1.223268471s] May 6 23:58:20.639: INFO: Created: latency-svc-c9qwg May 6 23:58:20.664: INFO: Got endpoints: latency-svc-c9qwg [1.233597479s] May 6 23:58:20.801: INFO: Created: latency-svc-qh9dn May 6 23:58:20.815: INFO: Got endpoints: latency-svc-qh9dn [1.342865943s] May 6 23:58:21.014: INFO: Created: latency-svc-zbfz7 May 6 23:58:21.017: INFO: Got endpoints: latency-svc-zbfz7 [1.43319804s] May 6 23:58:21.083: INFO: Created: latency-svc-bjwr9 May 6 23:58:21.102: INFO: Got endpoints: latency-svc-bjwr9 [1.436649049s] May 6 23:58:21.178: INFO: Created: latency-svc-r8tl9 May 6 23:58:21.222: INFO: Created: latency-svc-hmqvl May 6 23:58:21.222: INFO: Got endpoints: latency-svc-r8tl9 [1.466012821s] May 6 23:58:21.247: INFO: Got endpoints: latency-svc-hmqvl [1.437067285s] May 6 23:58:21.328: INFO: Created: latency-svc-b7kkv May 6 23:58:21.349: INFO: Got endpoints: latency-svc-b7kkv [1.502912795s] May 6 23:58:21.383: INFO: Created: latency-svc-4bnmh May 6 23:58:21.403: INFO: Got endpoints: latency-svc-4bnmh [1.436084027s] May 6 23:58:21.508: INFO: Created: latency-svc-vkmft May 6 23:58:21.511: INFO: Got endpoints: latency-svc-vkmft [1.416722889s] May 6 23:58:21.553: INFO: Created: latency-svc-cs25h May 6 23:58:21.565: INFO: Got endpoints: latency-svc-cs25h [1.406634451s] May 6 23:58:21.593: INFO: Created: latency-svc-rmbqm May 6 23:58:21.706: INFO: Got endpoints: latency-svc-rmbqm [1.427180641s] May 6 23:58:21.743: INFO: Created: latency-svc-w6n9f May 6 23:58:21.777: INFO: Got endpoints: latency-svc-w6n9f [1.431993187s] May 6 23:58:21.887: INFO: Created: latency-svc-djn49 May 6 23:58:21.902: INFO: Got endpoints: latency-svc-djn49 [1.480776672s] May 6 23:58:21.961: INFO: Created: latency-svc-fksqc May 6 23:58:21.974: INFO: Got endpoints: latency-svc-fksqc [1.400305218s] May 
6 23:58:22.041: INFO: Created: latency-svc-xwksm May 6 23:58:22.099: INFO: Got endpoints: latency-svc-xwksm [1.505826117s] May 6 23:58:22.204: INFO: Created: latency-svc-95m7g May 6 23:58:22.208: INFO: Got endpoints: latency-svc-95m7g [1.54375606s] May 6 23:58:22.257: INFO: Created: latency-svc-hlqs8 May 6 23:58:22.269: INFO: Got endpoints: latency-svc-hlqs8 [1.453628809s] May 6 23:58:22.302: INFO: Created: latency-svc-r5gkw May 6 23:58:22.394: INFO: Got endpoints: latency-svc-r5gkw [1.376993451s] May 6 23:58:22.405: INFO: Created: latency-svc-nxrb7 May 6 23:58:22.426: INFO: Got endpoints: latency-svc-nxrb7 [1.32357775s] May 6 23:58:22.453: INFO: Created: latency-svc-rpcmx May 6 23:58:22.468: INFO: Got endpoints: latency-svc-rpcmx [1.24553962s] May 6 23:58:22.487: INFO: Created: latency-svc-2xgtg May 6 23:58:22.556: INFO: Got endpoints: latency-svc-2xgtg [1.308800243s] May 6 23:58:22.561: INFO: Created: latency-svc-fgzq6 May 6 23:58:22.570: INFO: Got endpoints: latency-svc-fgzq6 [1.221278269s] May 6 23:58:22.597: INFO: Created: latency-svc-mjw8q May 6 23:58:22.612: INFO: Got endpoints: latency-svc-mjw8q [1.209437005s] May 6 23:58:22.633: INFO: Created: latency-svc-66jwg May 6 23:58:22.649: INFO: Got endpoints: latency-svc-66jwg [1.137669865s] May 6 23:58:22.693: INFO: Created: latency-svc-ks2h8 May 6 23:58:22.696: INFO: Got endpoints: latency-svc-ks2h8 [1.130664091s] May 6 23:58:22.759: INFO: Created: latency-svc-glfhv May 6 23:58:22.775: INFO: Got endpoints: latency-svc-glfhv [1.069027918s] May 6 23:58:22.867: INFO: Created: latency-svc-wzqj5 May 6 23:58:22.870: INFO: Got endpoints: latency-svc-wzqj5 [1.093698275s] May 6 23:58:23.065: INFO: Created: latency-svc-9k725 May 6 23:58:23.087: INFO: Got endpoints: latency-svc-9k725 [1.184480642s] May 6 23:58:23.149: INFO: Created: latency-svc-wtwg4 May 6 23:58:23.246: INFO: Got endpoints: latency-svc-wtwg4 [1.272272082s] May 6 23:58:23.252: INFO: Created: latency-svc-m8978 May 6 23:58:23.261: INFO: Got endpoints: 
latency-svc-m8978 [1.161871031s] May 6 23:58:23.292: INFO: Created: latency-svc-gssfb May 6 23:58:23.323: INFO: Got endpoints: latency-svc-gssfb [1.11541172s] May 6 23:58:23.424: INFO: Created: latency-svc-tx4b7 May 6 23:58:23.503: INFO: Got endpoints: latency-svc-tx4b7 [1.233574921s] May 6 23:58:23.503: INFO: Created: latency-svc-kg8hq May 6 23:58:23.628: INFO: Got endpoints: latency-svc-kg8hq [1.233895928s] May 6 23:58:24.296: INFO: Created: latency-svc-x7m55 May 6 23:58:24.311: INFO: Got endpoints: latency-svc-x7m55 [1.885601899s] May 6 23:58:24.599: INFO: Created: latency-svc-g9dh9 May 6 23:58:24.777: INFO: Got endpoints: latency-svc-g9dh9 [2.309778358s] May 6 23:58:24.970: INFO: Created: latency-svc-xrsl6 May 6 23:58:24.974: INFO: Got endpoints: latency-svc-xrsl6 [2.418745358s] May 6 23:58:25.143: INFO: Created: latency-svc-5fvjv May 6 23:58:25.146: INFO: Got endpoints: latency-svc-5fvjv [2.575797567s] May 6 23:58:25.401: INFO: Created: latency-svc-4wkxz May 6 23:58:25.412: INFO: Got endpoints: latency-svc-4wkxz [2.799601981s] May 6 23:58:25.448: INFO: Created: latency-svc-db46n May 6 23:58:25.463: INFO: Got endpoints: latency-svc-db46n [2.814123296s] May 6 23:58:25.498: INFO: Created: latency-svc-8sc58 May 6 23:58:25.575: INFO: Got endpoints: latency-svc-8sc58 [2.878617637s] May 6 23:58:25.576: INFO: Created: latency-svc-j4rx4 May 6 23:58:25.590: INFO: Got endpoints: latency-svc-j4rx4 [2.814746306s] May 6 23:58:25.623: INFO: Created: latency-svc-9rzk4 May 6 23:58:25.637: INFO: Got endpoints: latency-svc-9rzk4 [2.766712275s] May 6 23:58:25.772: INFO: Created: latency-svc-qr5wq May 6 23:58:25.800: INFO: Got endpoints: latency-svc-qr5wq [2.712899054s] May 6 23:58:26.362: INFO: Created: latency-svc-kdldt May 6 23:58:26.412: INFO: Got endpoints: latency-svc-kdldt [3.165452671s] May 6 23:58:26.616: INFO: Created: latency-svc-c47wh May 6 23:58:26.843: INFO: Got endpoints: latency-svc-c47wh [3.581569131s] May 6 23:58:26.852: INFO: Created: latency-svc-k4wp2 May 6 
23:58:27.072: INFO: Got endpoints: latency-svc-k4wp2 [3.748154573s] May 6 23:58:27.073: INFO: Created: latency-svc-w2txj May 6 23:58:27.112: INFO: Got endpoints: latency-svc-w2txj [3.609625877s] May 6 23:58:27.227: INFO: Created: latency-svc-vps7q May 6 23:58:27.229: INFO: Got endpoints: latency-svc-vps7q [3.601509864s] May 6 23:58:27.302: INFO: Created: latency-svc-p6n4k May 6 23:58:27.400: INFO: Got endpoints: latency-svc-p6n4k [3.088558645s] May 6 23:58:27.448: INFO: Created: latency-svc-xdkvw May 6 23:58:27.479: INFO: Got endpoints: latency-svc-xdkvw [2.701323374s] May 6 23:58:27.575: INFO: Created: latency-svc-nbv2z May 6 23:58:27.611: INFO: Created: latency-svc-xv4wg May 6 23:58:27.611: INFO: Got endpoints: latency-svc-nbv2z [2.636344826s] May 6 23:58:27.639: INFO: Got endpoints: latency-svc-xv4wg [2.492977555s] May 6 23:58:27.672: INFO: Created: latency-svc-7h8sc May 6 23:58:27.760: INFO: Got endpoints: latency-svc-7h8sc [2.347704088s] May 6 23:58:27.772: INFO: Created: latency-svc-2l7s6 May 6 23:58:27.780: INFO: Got endpoints: latency-svc-2l7s6 [2.316372378s] May 6 23:58:27.810: INFO: Created: latency-svc-ks5vc May 6 23:58:27.816: INFO: Got endpoints: latency-svc-ks5vc [2.24131492s] May 6 23:58:27.939: INFO: Created: latency-svc-vqz4r May 6 23:58:27.951: INFO: Got endpoints: latency-svc-vqz4r [2.360824385s] May 6 23:58:27.988: INFO: Created: latency-svc-pcxtn May 6 23:58:28.012: INFO: Got endpoints: latency-svc-pcxtn [2.374826329s] May 6 23:58:28.095: INFO: Created: latency-svc-6jpff May 6 23:58:28.132: INFO: Got endpoints: latency-svc-6jpff [2.332657958s] May 6 23:58:28.135: INFO: Created: latency-svc-p4zjb May 6 23:58:28.263: INFO: Got endpoints: latency-svc-p4zjb [1.851328799s] May 6 23:58:28.280: INFO: Created: latency-svc-rksb5 May 6 23:58:28.293: INFO: Got endpoints: latency-svc-rksb5 [1.450132521s] May 6 23:58:28.361: INFO: Created: latency-svc-xpcb6 May 6 23:58:28.436: INFO: Got endpoints: latency-svc-xpcb6 [1.364194455s] May 6 23:58:28.707: INFO: 
Created: latency-svc-st522 May 6 23:58:28.837: INFO: Got endpoints: latency-svc-st522 [1.724961233s] May 6 23:58:28.840: INFO: Created: latency-svc-psw9r May 6 23:58:28.905: INFO: Got endpoints: latency-svc-psw9r [1.675553927s] May 6 23:58:28.935: INFO: Created: latency-svc-n4mpc May 6 23:58:29.053: INFO: Got endpoints: latency-svc-n4mpc [1.652991328s] May 6 23:58:29.054: INFO: Created: latency-svc-czr8t May 6 23:58:29.337: INFO: Created: latency-svc-zqp2r May 6 23:58:29.337: INFO: Got endpoints: latency-svc-czr8t [1.858587928s] May 6 23:58:29.579: INFO: Got endpoints: latency-svc-zqp2r [1.968493716s] May 6 23:58:29.627: INFO: Created: latency-svc-mvrc7 May 6 23:58:29.673: INFO: Got endpoints: latency-svc-mvrc7 [2.033918645s] May 6 23:58:29.761: INFO: Created: latency-svc-b944p May 6 23:58:29.779: INFO: Got endpoints: latency-svc-b944p [2.018976806s] May 6 23:58:29.836: INFO: Created: latency-svc-lmh5f May 6 23:58:29.951: INFO: Got endpoints: latency-svc-lmh5f [2.171137149s] May 6 23:58:29.987: INFO: Created: latency-svc-2fmfx May 6 23:58:30.025: INFO: Got endpoints: latency-svc-2fmfx [2.209221228s] May 6 23:58:30.108: INFO: Created: latency-svc-mvlcq May 6 23:58:30.121: INFO: Got endpoints: latency-svc-mvlcq [2.17048154s] May 6 23:58:30.145: INFO: Created: latency-svc-fgcq8 May 6 23:58:30.164: INFO: Got endpoints: latency-svc-fgcq8 [2.151549018s] May 6 23:58:30.196: INFO: Created: latency-svc-95qlf May 6 23:58:30.304: INFO: Got endpoints: latency-svc-95qlf [2.171334115s] May 6 23:58:30.306: INFO: Created: latency-svc-65njf May 6 23:58:30.327: INFO: Got endpoints: latency-svc-65njf [2.063948413s] May 6 23:58:30.478: INFO: Created: latency-svc-mssnt May 6 23:58:30.557: INFO: Got endpoints: latency-svc-mssnt [2.26342255s] May 6 23:58:30.558: INFO: Created: latency-svc-nnmmf May 6 23:58:30.700: INFO: Got endpoints: latency-svc-nnmmf [2.263686865s] May 6 23:58:30.703: INFO: Created: latency-svc-n9kbj May 6 23:58:30.716: INFO: Got endpoints: latency-svc-n9kbj 
[1.878906221s] May 6 23:58:30.754: INFO: Created: latency-svc-fq6v9 May 6 23:58:30.783: INFO: Got endpoints: latency-svc-fq6v9 [1.877773896s] May 6 23:58:30.850: INFO: Created: latency-svc-hwhxp May 6 23:58:30.867: INFO: Got endpoints: latency-svc-hwhxp [1.813723888s] May 6 23:58:30.911: INFO: Created: latency-svc-bh9vp May 6 23:58:31.041: INFO: Got endpoints: latency-svc-bh9vp [1.70296576s] May 6 23:58:31.045: INFO: Created: latency-svc-vq2k4 May 6 23:58:31.089: INFO: Got endpoints: latency-svc-vq2k4 [1.509558956s] May 6 23:58:31.250: INFO: Created: latency-svc-wmk8f May 6 23:58:31.280: INFO: Got endpoints: latency-svc-wmk8f [1.607650031s] May 6 23:58:31.282: INFO: Created: latency-svc-pxflk May 6 23:58:31.412: INFO: Got endpoints: latency-svc-pxflk [1.632938098s] May 6 23:58:31.449: INFO: Created: latency-svc-b2qxv May 6 23:58:31.467: INFO: Got endpoints: latency-svc-b2qxv [1.516492709s] May 6 23:58:31.509: INFO: Created: latency-svc-qnb4h May 6 23:58:31.598: INFO: Got endpoints: latency-svc-qnb4h [1.572538003s] May 6 23:58:31.632: INFO: Created: latency-svc-nxcmr May 6 23:58:31.664: INFO: Got endpoints: latency-svc-nxcmr [1.542828637s] May 6 23:58:31.777: INFO: Created: latency-svc-jw6sh May 6 23:58:31.786: INFO: Got endpoints: latency-svc-jw6sh [1.622888049s] May 6 23:58:31.865: INFO: Created: latency-svc-5m7jv May 6 23:58:31.944: INFO: Got endpoints: latency-svc-5m7jv [1.640539112s] May 6 23:58:31.955: INFO: Created: latency-svc-vj2tp May 6 23:58:31.962: INFO: Got endpoints: latency-svc-vj2tp [1.634520288s] May 6 23:58:32.032: INFO: Created: latency-svc-s9kq6 May 6 23:58:32.155: INFO: Got endpoints: latency-svc-s9kq6 [1.598006429s] May 6 23:58:32.156: INFO: Created: latency-svc-wl8rj May 6 23:58:32.166: INFO: Got endpoints: latency-svc-wl8rj [1.465893239s] May 6 23:58:32.224: INFO: Created: latency-svc-zwqqs May 6 23:58:32.254: INFO: Got endpoints: latency-svc-zwqqs [1.53800257s] May 6 23:58:32.353: INFO: Created: latency-svc-7n655 May 6 23:58:32.364: INFO: 
Got endpoints: latency-svc-7n655 [1.581208783s] May 6 23:58:32.393: INFO: Created: latency-svc-f96wx May 6 23:58:32.413: INFO: Got endpoints: latency-svc-f96wx [1.546087795s] May 6 23:58:32.490: INFO: Created: latency-svc-qrj6v May 6 23:58:32.493: INFO: Got endpoints: latency-svc-qrj6v [1.452330784s] May 6 23:58:32.556: INFO: Created: latency-svc-47wl4 May 6 23:58:32.569: INFO: Got endpoints: latency-svc-47wl4 [1.480401222s] May 6 23:58:32.664: INFO: Created: latency-svc-9g6s6 May 6 23:58:32.873: INFO: Got endpoints: latency-svc-9g6s6 [1.592172526s] May 6 23:58:32.873: INFO: Created: latency-svc-kqsx2 May 6 23:58:32.877: INFO: Got endpoints: latency-svc-kqsx2 [1.46528604s] May 6 23:58:32.920: INFO: Created: latency-svc-rrqmr May 6 23:58:32.941: INFO: Got endpoints: latency-svc-rrqmr [1.473692482s] May 6 23:58:33.083: INFO: Created: latency-svc-7p8hm May 6 23:58:33.094: INFO: Got endpoints: latency-svc-7p8hm [1.495973208s] May 6 23:58:33.094: INFO: Latencies: [194.609965ms 333.915416ms 411.317562ms 546.23366ms 734.995894ms 1.041663265s 1.059154473s 1.069027918s 1.072659566s 1.077514696s 1.093698275s 1.101122993s 1.103611525s 1.11084983s 1.11541172s 1.130664091s 1.131075832s 1.13143907s 1.137669865s 1.152012729s 1.153767712s 1.155464827s 1.161871031s 1.163958041s 1.169692043s 1.171866318s 1.172936908s 1.173809368s 1.180317483s 1.184480642s 1.1910467s 1.195862841s 1.204024023s 1.209155195s 1.209437005s 1.21128614s 1.215876396s 1.217506283s 1.220367932s 1.221278269s 1.223268471s 1.224071791s 1.230129149s 1.230969596s 1.233236978s 1.233574921s 1.233597479s 1.233626439s 1.233887316s 1.233895928s 1.237385628s 1.238596931s 1.239316309s 1.239527825s 1.239753054s 1.244094874s 1.245026108s 1.24553962s 1.255624613s 1.260128398s 1.262663164s 1.262888946s 1.26301903s 1.271900944s 1.272030549s 1.272272082s 1.283696314s 1.288357372s 1.289480261s 1.293774741s 1.294087719s 1.299628758s 1.299838752s 1.302953446s 1.306444269s 1.306685083s 1.308800243s 1.312635771s 1.315222194s 
1.316341256s 1.320483982s 1.323203559s 1.32357775s 1.325551772s 1.329777413s 1.330615382s 1.334314168s 1.334999309s 1.335465306s 1.34118051s 1.342865943s 1.364194455s 1.367542606s 1.370709161s 1.376993451s 1.385773392s 1.391372606s 1.400305218s 1.406634451s 1.413792755s 1.416722889s 1.427180641s 1.431993187s 1.43319804s 1.436084027s 1.436649049s 1.437067285s 1.450132521s 1.452330784s 1.453628809s 1.46528604s 1.465893239s 1.466012821s 1.470570882s 1.473692482s 1.480401222s 1.480776672s 1.495973208s 1.502912795s 1.505826117s 1.509558956s 1.516492709s 1.517482394s 1.521716943s 1.536680954s 1.53800257s 1.542828637s 1.54375606s 1.546087795s 1.572538003s 1.581208783s 1.581531463s 1.592172526s 1.598006429s 1.607650031s 1.622888049s 1.632938098s 1.634520288s 1.640539112s 1.652991328s 1.675553927s 1.70296576s 1.724961233s 1.795948975s 1.813723888s 1.851328799s 1.858587928s 1.877773896s 1.878906221s 1.885601899s 1.968493716s 2.018976806s 2.033918645s 2.063948413s 2.143938298s 2.151549018s 2.17048154s 2.171137149s 2.171334115s 2.209221228s 2.24131492s 2.26342255s 2.263686865s 2.304306324s 2.309778358s 2.316372378s 2.332657958s 2.347704088s 2.360824385s 2.374826329s 2.418745358s 2.46126737s 2.492977555s 2.528249327s 2.575797567s 2.615850644s 2.636344826s 2.696915787s 2.701323374s 2.712899054s 2.766712275s 2.799601981s 2.814123296s 2.814746306s 2.840306271s 2.878617637s 2.933988688s 3.062120561s 3.079446575s 3.082638686s 3.088558645s 3.119495611s 3.151747389s 3.162938222s 3.165452671s 3.259511005s 3.581569131s 3.601509864s 3.609625877s 3.748154573s] May 6 23:58:33.094: INFO: 50 %ile: 1.416722889s May 6 23:58:33.094: INFO: 90 %ile: 2.766712275s May 6 23:58:33.094: INFO: 99 %ile: 3.609625877s May 6 23:58:33.094: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 23:58:33.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "svc-latency-6566" for this suite. • [SLOW TEST:28.425 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":2,"skipped":2,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 23:58:33.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 6 23:58:34.225: INFO: Waiting up to 5m0s for pod "pod-89eabe19-0b86-45e2-b7bc-265596b22b92" in namespace "emptydir-5563" to be "Succeeded or Failed" May 6 23:58:34.235: INFO: Pod "pod-89eabe19-0b86-45e2-b7bc-265596b22b92": Phase="Pending", Reason="", readiness=false. Elapsed: 9.955775ms May 6 23:58:36.239: INFO: Pod "pod-89eabe19-0b86-45e2-b7bc-265596b22b92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014164318s May 6 23:58:38.243: INFO: Pod "pod-89eabe19-0b86-45e2-b7bc-265596b22b92": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.018605586s May 6 23:58:40.574: INFO: Pod "pod-89eabe19-0b86-45e2-b7bc-265596b22b92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.349031265s May 6 23:58:42.658: INFO: Pod "pod-89eabe19-0b86-45e2-b7bc-265596b22b92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.433516813s STEP: Saw pod success May 6 23:58:42.658: INFO: Pod "pod-89eabe19-0b86-45e2-b7bc-265596b22b92" satisfied condition "Succeeded or Failed" May 6 23:58:42.843: INFO: Trying to get logs from node latest-worker2 pod pod-89eabe19-0b86-45e2-b7bc-265596b22b92 container test-container: STEP: delete the pod May 6 23:58:43.573: INFO: Waiting for pod pod-89eabe19-0b86-45e2-b7bc-265596b22b92 to disappear May 6 23:58:43.629: INFO: Pod pod-89eabe19-0b86-45e2-b7bc-265596b22b92 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 23:58:43.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5563" for this suite. 
• [SLOW TEST:10.437 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":3,"skipped":37,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 23:58:43.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-4788bbdf-cfb3-4913-809d-b85eb63d2967 STEP: Creating a pod to test consume secrets May 6 23:58:44.065: INFO: Waiting up to 5m0s for pod "pod-secrets-3efeda06-1c82-4c14-b271-ce80d1067771" in namespace "secrets-33" to be "Succeeded or Failed" May 6 23:58:44.109: INFO: Pod "pod-secrets-3efeda06-1c82-4c14-b271-ce80d1067771": Phase="Pending", Reason="", readiness=false. Elapsed: 44.145135ms May 6 23:58:46.497: INFO: Pod "pod-secrets-3efeda06-1c82-4c14-b271-ce80d1067771": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.432362857s May 6 23:58:48.525: INFO: Pod "pod-secrets-3efeda06-1c82-4c14-b271-ce80d1067771": Phase="Running", Reason="", readiness=true. Elapsed: 4.459918355s May 6 23:58:50.562: INFO: Pod "pod-secrets-3efeda06-1c82-4c14-b271-ce80d1067771": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.496696065s STEP: Saw pod success May 6 23:58:50.562: INFO: Pod "pod-secrets-3efeda06-1c82-4c14-b271-ce80d1067771" satisfied condition "Succeeded or Failed" May 6 23:58:50.593: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-3efeda06-1c82-4c14-b271-ce80d1067771 container secret-env-test: STEP: delete the pod May 6 23:58:50.731: INFO: Waiting for pod pod-secrets-3efeda06-1c82-4c14-b271-ce80d1067771 to disappear May 6 23:58:50.812: INFO: Pod pod-secrets-3efeda06-1c82-4c14-b271-ce80d1067771 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 23:58:50.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-33" for this suite. 
• [SLOW TEST:7.128 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":4,"skipped":41,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 23:58:50.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:00:52.340: INFO: Deleting pod "var-expansion-e263e368-694b-41d0-ba3f-ebb0866a19b4" in namespace "var-expansion-5884" May 7 00:00:52.364: INFO: Wait up to 5m0s for pod "var-expansion-e263e368-694b-41d0-ba3f-ebb0866a19b4" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:00:56.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5884" for this suite. 
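The backtick test above passes by failing deliberately: Kubernetes variable expansion only substitutes `$(VAR)` references, so a backtick in a volume subpath survives expansion untouched and the pod is then rejected. A simplified expander illustrating this (real Kubernetes expansion also handles `$$` escaping and other details this sketch omits):

```go
package main

import (
	"fmt"
	"strings"
)

// expand performs simplified $(VAR) substitution: known variables are
// replaced, while unknown references and all other characters --
// including backticks -- pass through unchanged.
func expand(input string, vars map[string]string) string {
	var out strings.Builder
	for i := 0; i < len(input); i++ {
		if input[i] == '$' && i+1 < len(input) && input[i+1] == '(' {
			if end := strings.IndexByte(input[i+2:], ')'); end >= 0 {
				if val, ok := vars[input[i+2:i+2+end]]; ok {
					out.WriteString(val)
					i += 2 + end
					continue
				}
			}
		}
		out.WriteByte(input[i])
	}
	return out.String()
}

func main() {
	vars := map[string]string{"POD_NAME": "var-expansion-e263e368"}
	fmt.Println(expand("logs/$(POD_NAME)", vars)) // logs/var-expansion-e263e368
	fmt.Println(expand("logs/`hostname`", vars))  // backticks pass through unchanged
}
```

Because the backticks survive into the subpath value, validation refuses the pod, which is exactly the failure the conformance test waits two minutes to observe before cleaning up.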
• [SLOW TEST:126.095 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":5,"skipped":50,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:00:56.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5700 STEP: creating service affinity-nodeport-transition in namespace services-5700 STEP: creating replication controller affinity-nodeport-transition in namespace services-5700 I0507 00:00:58.382827 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-5700, replica count: 3 I0507 00:01:01.433310 7 runners.go:190] affinity-nodeport-transition Pods: 3 out 
of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:01:04.433529 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:01:07.433737 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:01:10.433944 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 7 00:01:10.447: INFO: Creating new exec pod May 7 00:01:17.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5700 execpod-affinity6tj5d -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 7 00:01:20.287: INFO: stderr: "I0507 00:01:20.186555 31 log.go:172] (0xc000c60000) (0xc0006e8dc0) Create stream\nI0507 00:01:20.186640 31 log.go:172] (0xc000c60000) (0xc0006e8dc0) Stream added, broadcasting: 1\nI0507 00:01:20.188470 31 log.go:172] (0xc000c60000) Reply frame received for 1\nI0507 00:01:20.188510 31 log.go:172] (0xc000c60000) (0xc0006e9d60) Create stream\nI0507 00:01:20.188523 31 log.go:172] (0xc000c60000) (0xc0006e9d60) Stream added, broadcasting: 3\nI0507 00:01:20.189716 31 log.go:172] (0xc000c60000) Reply frame received for 3\nI0507 00:01:20.189773 31 log.go:172] (0xc000c60000) (0xc0006de640) Create stream\nI0507 00:01:20.189788 31 log.go:172] (0xc000c60000) (0xc0006de640) Stream added, broadcasting: 5\nI0507 00:01:20.190615 31 log.go:172] (0xc000c60000) Reply frame received for 5\nI0507 00:01:20.279873 31 log.go:172] (0xc000c60000) Data frame received for 5\nI0507 00:01:20.279911 31 log.go:172] (0xc0006de640) (5) Data frame handling\nI0507 00:01:20.279940 31 log.go:172] (0xc0006de640) (5) Data frame 
sent\nI0507 00:01:20.279955 31 log.go:172] (0xc000c60000) Data frame received for 5\nI0507 00:01:20.279969 31 log.go:172] (0xc0006de640) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0507 00:01:20.280018 31 log.go:172] (0xc0006de640) (5) Data frame sent\nI0507 00:01:20.280167 31 log.go:172] (0xc000c60000) Data frame received for 3\nI0507 00:01:20.280186 31 log.go:172] (0xc0006e9d60) (3) Data frame handling\nI0507 00:01:20.280236 31 log.go:172] (0xc000c60000) Data frame received for 5\nI0507 00:01:20.280265 31 log.go:172] (0xc0006de640) (5) Data frame handling\nI0507 00:01:20.282154 31 log.go:172] (0xc000c60000) Data frame received for 1\nI0507 00:01:20.282188 31 log.go:172] (0xc0006e8dc0) (1) Data frame handling\nI0507 00:01:20.282211 31 log.go:172] (0xc0006e8dc0) (1) Data frame sent\nI0507 00:01:20.282231 31 log.go:172] (0xc000c60000) (0xc0006e8dc0) Stream removed, broadcasting: 1\nI0507 00:01:20.282253 31 log.go:172] (0xc000c60000) Go away received\nI0507 00:01:20.282567 31 log.go:172] (0xc000c60000) (0xc0006e8dc0) Stream removed, broadcasting: 1\nI0507 00:01:20.282588 31 log.go:172] (0xc000c60000) (0xc0006e9d60) Stream removed, broadcasting: 3\nI0507 00:01:20.282600 31 log.go:172] (0xc000c60000) (0xc0006de640) Stream removed, broadcasting: 5\n" May 7 00:01:20.287: INFO: stdout: "" May 7 00:01:20.288: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5700 execpod-affinity6tj5d -- /bin/sh -x -c nc -zv -t -w 2 10.108.137.69 80' May 7 00:01:20.495: INFO: stderr: "I0507 00:01:20.424250 64 log.go:172] (0xc0009b53f0) (0xc000708500) Create stream\nI0507 00:01:20.424292 64 log.go:172] (0xc0009b53f0) (0xc000708500) Stream added, broadcasting: 1\nI0507 00:01:20.430043 64 log.go:172] (0xc0009b53f0) Reply frame received for 1\nI0507 00:01:20.430109 64 log.go:172] (0xc0009b53f0) (0xc0006f5400) 
Create stream\nI0507 00:01:20.430134 64 log.go:172] (0xc0009b53f0) (0xc0006f5400) Stream added, broadcasting: 3\nI0507 00:01:20.430923 64 log.go:172] (0xc0009b53f0) Reply frame received for 3\nI0507 00:01:20.430957 64 log.go:172] (0xc0009b53f0) (0xc000584460) Create stream\nI0507 00:01:20.430968 64 log.go:172] (0xc0009b53f0) (0xc000584460) Stream added, broadcasting: 5\nI0507 00:01:20.431766 64 log.go:172] (0xc0009b53f0) Reply frame received for 5\nI0507 00:01:20.488744 64 log.go:172] (0xc0009b53f0) Data frame received for 5\nI0507 00:01:20.488770 64 log.go:172] (0xc000584460) (5) Data frame handling\nI0507 00:01:20.488781 64 log.go:172] (0xc000584460) (5) Data frame sent\nI0507 00:01:20.488789 64 log.go:172] (0xc0009b53f0) Data frame received for 5\nI0507 00:01:20.488796 64 log.go:172] (0xc000584460) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.137.69 80\nConnection to 10.108.137.69 80 port [tcp/http] succeeded!\nI0507 00:01:20.488840 64 log.go:172] (0xc0009b53f0) Data frame received for 3\nI0507 00:01:20.488860 64 log.go:172] (0xc0006f5400) (3) Data frame handling\nI0507 00:01:20.490630 64 log.go:172] (0xc0009b53f0) Data frame received for 1\nI0507 00:01:20.490663 64 log.go:172] (0xc000708500) (1) Data frame handling\nI0507 00:01:20.490681 64 log.go:172] (0xc000708500) (1) Data frame sent\nI0507 00:01:20.490692 64 log.go:172] (0xc0009b53f0) (0xc000708500) Stream removed, broadcasting: 1\nI0507 00:01:20.490868 64 log.go:172] (0xc0009b53f0) Go away received\nI0507 00:01:20.490985 64 log.go:172] (0xc0009b53f0) (0xc000708500) Stream removed, broadcasting: 1\nI0507 00:01:20.491006 64 log.go:172] (0xc0009b53f0) (0xc0006f5400) Stream removed, broadcasting: 3\nI0507 00:01:20.491017 64 log.go:172] (0xc0009b53f0) (0xc000584460) Stream removed, broadcasting: 5\n" May 7 00:01:20.495: INFO: stdout: "" May 7 00:01:20.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5700 
execpod-affinity6tj5d -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30973' May 7 00:01:20.713: INFO: stderr: "I0507 00:01:20.626083 84 log.go:172] (0xc0009c2e70) (0xc000a34460) Create stream\nI0507 00:01:20.626161 84 log.go:172] (0xc0009c2e70) (0xc000a34460) Stream added, broadcasting: 1\nI0507 00:01:20.631244 84 log.go:172] (0xc0009c2e70) Reply frame received for 1\nI0507 00:01:20.631280 84 log.go:172] (0xc0009c2e70) (0xc0006eee60) Create stream\nI0507 00:01:20.631288 84 log.go:172] (0xc0009c2e70) (0xc0006eee60) Stream added, broadcasting: 3\nI0507 00:01:20.632277 84 log.go:172] (0xc0009c2e70) Reply frame received for 3\nI0507 00:01:20.632333 84 log.go:172] (0xc0009c2e70) (0xc0006aa500) Create stream\nI0507 00:01:20.632353 84 log.go:172] (0xc0009c2e70) (0xc0006aa500) Stream added, broadcasting: 5\nI0507 00:01:20.633609 84 log.go:172] (0xc0009c2e70) Reply frame received for 5\nI0507 00:01:20.706479 84 log.go:172] (0xc0009c2e70) Data frame received for 3\nI0507 00:01:20.706538 84 log.go:172] (0xc0006eee60) (3) Data frame handling\nI0507 00:01:20.706567 84 log.go:172] (0xc0009c2e70) Data frame received for 5\nI0507 00:01:20.706579 84 log.go:172] (0xc0006aa500) (5) Data frame handling\nI0507 00:01:20.706594 84 log.go:172] (0xc0006aa500) (5) Data frame sent\nI0507 00:01:20.706606 84 log.go:172] (0xc0009c2e70) Data frame received for 5\nI0507 00:01:20.706616 84 log.go:172] (0xc0006aa500) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30973\nConnection to 172.17.0.13 30973 port [tcp/30973] succeeded!\nI0507 00:01:20.708587 84 log.go:172] (0xc0009c2e70) Data frame received for 1\nI0507 00:01:20.708614 84 log.go:172] (0xc000a34460) (1) Data frame handling\nI0507 00:01:20.708636 84 log.go:172] (0xc000a34460) (1) Data frame sent\nI0507 00:01:20.708664 84 log.go:172] (0xc0009c2e70) (0xc000a34460) Stream removed, broadcasting: 1\nI0507 00:01:20.708686 84 log.go:172] (0xc0009c2e70) Go away received\nI0507 00:01:20.708986 84 log.go:172] (0xc0009c2e70) (0xc000a34460) Stream 
removed, broadcasting: 1\nI0507 00:01:20.709007 84 log.go:172] (0xc0009c2e70) (0xc0006eee60) Stream removed, broadcasting: 3\nI0507 00:01:20.709015 84 log.go:172] (0xc0009c2e70) (0xc0006aa500) Stream removed, broadcasting: 5\n" May 7 00:01:20.713: INFO: stdout: "" May 7 00:01:20.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5700 execpod-affinity6tj5d -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30973' May 7 00:01:20.906: INFO: stderr: "I0507 00:01:20.844252 103 log.go:172] (0xc000b2ac60) (0xc00034e460) Create stream\nI0507 00:01:20.844299 103 log.go:172] (0xc000b2ac60) (0xc00034e460) Stream added, broadcasting: 1\nI0507 00:01:20.846794 103 log.go:172] (0xc000b2ac60) Reply frame received for 1\nI0507 00:01:20.846846 103 log.go:172] (0xc000b2ac60) (0xc000b1c000) Create stream\nI0507 00:01:20.846859 103 log.go:172] (0xc000b2ac60) (0xc000b1c000) Stream added, broadcasting: 3\nI0507 00:01:20.847713 103 log.go:172] (0xc000b2ac60) Reply frame received for 3\nI0507 00:01:20.847747 103 log.go:172] (0xc000b2ac60) (0xc0009170e0) Create stream\nI0507 00:01:20.847763 103 log.go:172] (0xc000b2ac60) (0xc0009170e0) Stream added, broadcasting: 5\nI0507 00:01:20.848448 103 log.go:172] (0xc000b2ac60) Reply frame received for 5\nI0507 00:01:20.898860 103 log.go:172] (0xc000b2ac60) Data frame received for 5\nI0507 00:01:20.898898 103 log.go:172] (0xc0009170e0) (5) Data frame handling\nI0507 00:01:20.898907 103 log.go:172] (0xc0009170e0) (5) Data frame sent\nI0507 00:01:20.898913 103 log.go:172] (0xc000b2ac60) Data frame received for 5\nI0507 00:01:20.898920 103 log.go:172] (0xc0009170e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30973\nConnection to 172.17.0.12 30973 port [tcp/30973] succeeded!\nI0507 00:01:20.898938 103 log.go:172] (0xc000b2ac60) Data frame received for 3\nI0507 00:01:20.898944 103 log.go:172] (0xc000b1c000) (3) Data frame handling\nI0507 00:01:20.900630 103 
log.go:172] (0xc000b2ac60) Data frame received for 1\nI0507 00:01:20.900643 103 log.go:172] (0xc00034e460) (1) Data frame handling\nI0507 00:01:20.900650 103 log.go:172] (0xc00034e460) (1) Data frame sent\nI0507 00:01:20.900661 103 log.go:172] (0xc000b2ac60) (0xc00034e460) Stream removed, broadcasting: 1\nI0507 00:01:20.900675 103 log.go:172] (0xc000b2ac60) Go away received\nI0507 00:01:20.900981 103 log.go:172] (0xc000b2ac60) (0xc00034e460) Stream removed, broadcasting: 1\nI0507 00:01:20.901000 103 log.go:172] (0xc000b2ac60) (0xc000b1c000) Stream removed, broadcasting: 3\nI0507 00:01:20.901009 103 log.go:172] (0xc000b2ac60) (0xc0009170e0) Stream removed, broadcasting: 5\n" May 7 00:01:20.906: INFO: stdout: "" May 7 00:01:20.916: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5700 execpod-affinity6tj5d -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30973/ ; done' May 7 00:01:21.217: INFO: stderr: "I0507 00:01:21.047433 122 log.go:172] (0xc0000e88f0) (0xc00050ef00) Create stream\nI0507 00:01:21.047503 122 log.go:172] (0xc0000e88f0) (0xc00050ef00) Stream added, broadcasting: 1\nI0507 00:01:21.050251 122 log.go:172] (0xc0000e88f0) Reply frame received for 1\nI0507 00:01:21.050285 122 log.go:172] (0xc0000e88f0) (0xc0000f3f40) Create stream\nI0507 00:01:21.050292 122 log.go:172] (0xc0000e88f0) (0xc0000f3f40) Stream added, broadcasting: 3\nI0507 00:01:21.051299 122 log.go:172] (0xc0000e88f0) Reply frame received for 3\nI0507 00:01:21.051342 122 log.go:172] (0xc0000e88f0) (0xc000430320) Create stream\nI0507 00:01:21.051354 122 log.go:172] (0xc0000e88f0) (0xc000430320) Stream added, broadcasting: 5\nI0507 00:01:21.052259 122 log.go:172] (0xc0000e88f0) Reply frame received for 5\nI0507 00:01:21.110164 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.110205 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 
00:01:21.110220 122 log.go:172] (0xc000430320) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.110243 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.110254 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.110265 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.116126 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.116153 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.116171 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.116725 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.116741 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.116750 122 log.go:172] (0xc000430320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.116804 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.116836 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.116871 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.124946 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.124968 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.125007 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.125690 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.125705 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.125716 122 log.go:172] (0xc000430320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0507 00:01:21.125775 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.125793 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.125804 122 log.go:172] (0xc000430320) (5) Data frame sent\n 2 http://172.17.0.13:30973/\nI0507 00:01:21.125936 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 
00:01:21.125964 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.125986 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.131695 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.131713 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.131724 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.132109 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.132122 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.132130 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.132152 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.132176 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.132197 122 log.go:172] (0xc000430320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.137369 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.137396 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.137423 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.138029 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.138065 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.138081 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.138100 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.138111 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.138123 122 log.go:172] (0xc000430320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.143047 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.143090 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.143159 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.143549 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 
00:01:21.143560 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.143567 122 log.go:172] (0xc000430320) (5) Data frame sent\nI0507 00:01:21.143575 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.143581 122 log.go:172] (0xc000430320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.143598 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.143631 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.143645 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.143667 122 log.go:172] (0xc000430320) (5) Data frame sent\nI0507 00:01:21.151345 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.151356 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.151362 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.151742 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.151752 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.151758 122 log.go:172] (0xc000430320) (5) Data frame sent\n+ echo\nI0507 00:01:21.151884 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.151907 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.151921 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.151938 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.151949 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.151962 122 log.go:172] (0xc000430320) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.155982 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.155999 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.156007 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.156546 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 
00:01:21.156580 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.156669 122 log.go:172] (0xc000430320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.156877 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.156911 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.156945 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.161959 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.162002 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.162030 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.162495 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.162532 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.162551 122 log.go:172] (0xc000430320) (5) Data frame sent\n+ echo\n+ I0507 00:01:21.162875 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.162898 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.162912 122 log.go:172] (0xc000430320) (5) Data frame sent\ncurl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.162960 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.162982 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.162994 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.166885 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.166899 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.166906 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.167522 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.167535 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.167542 122 log.go:172] (0xc000430320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.167566 122 
log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.167583 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.167596 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.174222 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.174244 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.174267 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.174855 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.174878 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.174893 122 log.go:172] (0xc000430320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0507 00:01:21.174978 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.175000 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.175035 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.175050 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.175074 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.175095 122 log.go:172] (0xc000430320) (5) Data frame sent\n http://172.17.0.13:30973/\nI0507 00:01:21.180190 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.180218 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.180246 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.180796 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.180840 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.180857 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.180878 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.180891 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.180912 122 log.go:172] (0xc000430320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.184417 122 
log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.184452 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.184481 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.185059 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.185085 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.185096 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.185356 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.185376 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.185392 122 log.go:172] (0xc000430320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.190988 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.191018 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.191057 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.191997 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.192020 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.192039 122 log.go:172] (0xc000430320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.192325 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.192345 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.192361 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.196591 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.196699 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.196741 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.197306 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.197328 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.197343 122 log.go:172] (0xc000430320) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.197371 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.197413 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.197433 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.203272 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.203356 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.203434 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.203770 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.203790 122 log.go:172] (0xc000430320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.203811 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.203839 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.203865 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.203895 122 log.go:172] (0xc000430320) (5) Data frame sent\nI0507 00:01:21.208183 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.208223 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.208244 122 log.go:172] (0xc0000f3f40) (3) Data frame sent\nI0507 00:01:21.209373 122 log.go:172] (0xc0000e88f0) Data frame received for 5\nI0507 00:01:21.209494 122 log.go:172] (0xc000430320) (5) Data frame handling\nI0507 00:01:21.209587 122 log.go:172] (0xc0000e88f0) Data frame received for 3\nI0507 00:01:21.209619 122 log.go:172] (0xc0000f3f40) (3) Data frame handling\nI0507 00:01:21.211789 122 log.go:172] (0xc0000e88f0) Data frame received for 1\nI0507 00:01:21.211809 122 log.go:172] (0xc00050ef00) (1) Data frame handling\nI0507 00:01:21.211822 122 log.go:172] (0xc00050ef00) (1) Data frame sent\nI0507 00:01:21.211835 122 log.go:172] (0xc0000e88f0) (0xc00050ef00) Stream removed, broadcasting: 1\nI0507 00:01:21.212213 122 log.go:172] (0xc0000e88f0) (0xc00050ef00) Stream 
removed, broadcasting: 1\nI0507 00:01:21.212236 122 log.go:172] (0xc0000e88f0) (0xc0000f3f40) Stream removed, broadcasting: 3\nI0507 00:01:21.212411 122 log.go:172] (0xc0000e88f0) (0xc000430320) Stream removed, broadcasting: 5\n" May 7 00:01:21.218: INFO: stdout: "\naffinity-nodeport-transition-fdhkh\naffinity-nodeport-transition-fdhkh\naffinity-nodeport-transition-fdhkh\naffinity-nodeport-transition-fdhkh\naffinity-nodeport-transition-vl8dl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-vl8dl\naffinity-nodeport-transition-fdhkh\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-fdhkh\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-fdhkh\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-vl8dl" May 7 00:01:21.218: INFO: Received response from host: May 7 00:01:21.218: INFO: Received response from host: affinity-nodeport-transition-fdhkh May 7 00:01:21.218: INFO: Received response from host: affinity-nodeport-transition-fdhkh May 7 00:01:21.218: INFO: Received response from host: affinity-nodeport-transition-fdhkh May 7 00:01:21.218: INFO: Received response from host: affinity-nodeport-transition-fdhkh May 7 00:01:21.218: INFO: Received response from host: affinity-nodeport-transition-vl8dl May 7 00:01:21.218: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.218: INFO: Received response from host: affinity-nodeport-transition-vl8dl May 7 00:01:21.218: INFO: Received response from host: affinity-nodeport-transition-fdhkh May 7 00:01:21.218: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.218: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.218: INFO: Received response from host: affinity-nodeport-transition-fdhkh May 7 00:01:21.218: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.218: 
INFO: Received response from host: affinity-nodeport-transition-fdhkh May 7 00:01:21.218: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.218: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.218: INFO: Received response from host: affinity-nodeport-transition-vl8dl May 7 00:01:21.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5700 execpod-affinity6tj5d -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30973/ ; done' May 7 00:01:21.543: INFO: stderr: "I0507 00:01:21.380382 143 log.go:172] (0xc000b56790) (0xc00041f680) Create stream\nI0507 00:01:21.380463 143 log.go:172] (0xc000b56790) (0xc00041f680) Stream added, broadcasting: 1\nI0507 00:01:21.386995 143 log.go:172] (0xc000b56790) Reply frame received for 1\nI0507 00:01:21.387023 143 log.go:172] (0xc000b56790) (0xc0007265a0) Create stream\nI0507 00:01:21.387030 143 log.go:172] (0xc000b56790) (0xc0007265a0) Stream added, broadcasting: 3\nI0507 00:01:21.387848 143 log.go:172] (0xc000b56790) Reply frame received for 3\nI0507 00:01:21.387871 143 log.go:172] (0xc000b56790) (0xc000726f00) Create stream\nI0507 00:01:21.387877 143 log.go:172] (0xc000b56790) (0xc000726f00) Stream added, broadcasting: 5\nI0507 00:01:21.388598 143 log.go:172] (0xc000b56790) Reply frame received for 5\nI0507 00:01:21.445638 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.445680 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.445694 143 log.go:172] (0xc000726f00) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.445713 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.445726 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.445742 143 log.go:172] (0xc0007265a0) (3) Data frame 
sent\nI0507 00:01:21.448852 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.448889 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.448929 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.449347 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.449390 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.449414 143 log.go:172] (0xc000726f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.449441 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.449461 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.449489 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.452923 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.452939 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.452957 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.453412 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.453444 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.453472 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.453493 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.453505 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.453522 143 log.go:172] (0xc000726f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.459424 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.459466 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.459491 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.459822 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.459841 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.459865 143 log.go:172] (0xc000726f00) (5) Data frame 
sent\nI0507 00:01:21.459878 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.459888 143 log.go:172] (0xc000726f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.459922 143 log.go:172] (0xc000726f00) (5) Data frame sent\nI0507 00:01:21.459991 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.460008 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.460027 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.466285 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.466306 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.466324 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.467153 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.467175 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.467184 143 log.go:172] (0xc000726f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.467200 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.467208 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.467215 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.474606 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.474633 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.474653 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.475167 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.475192 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.475203 143 log.go:172] (0xc000726f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.475216 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.475237 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 
00:01:21.475245 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.482004 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.482021 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.482038 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.482489 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.482515 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.482527 143 log.go:172] (0xc000726f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.482542 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.482554 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.482562 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.486042 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.486083 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.486113 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.486413 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.486455 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.486475 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.486504 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.486515 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.486540 143 log.go:172] (0xc000726f00) (5) Data frame sent\nI0507 00:01:21.486563 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.486578 143 log.go:172] (0xc000726f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.486602 143 log.go:172] (0xc000726f00) (5) Data frame sent\nI0507 00:01:21.490334 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.490362 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 
00:01:21.490398 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.490868 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.490899 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.490910 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.490930 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.490950 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.490988 143 log.go:172] (0xc000726f00) (5) Data frame sent\nI0507 00:01:21.491004 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.491015 143 log.go:172] (0xc000726f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.491034 143 log.go:172] (0xc000726f00) (5) Data frame sent\nI0507 00:01:21.495030 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.495063 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.495092 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.495452 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.495507 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.495539 143 log.go:172] (0xc000726f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.495569 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.495590 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.495624 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.500305 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.500348 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.500379 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.500745 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.500774 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 
00:01:21.500817 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.500835 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.500866 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.500901 143 log.go:172] (0xc000726f00) (5) Data frame sent\nI0507 00:01:21.500914 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.500935 143 log.go:172] (0xc000726f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.500955 143 log.go:172] (0xc000726f00) (5) Data frame sent\nI0507 00:01:21.505515 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.505548 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.505578 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.505617 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.505640 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.505677 143 log.go:172] (0xc000726f00) (5) Data frame sent\n+ echo\nI0507 00:01:21.505806 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.505825 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.505844 143 log.go:172] (0xc000726f00) (5) Data frame sent\nI0507 00:01:21.505875 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.505894 143 log.go:172] (0xc0007265a0) (3) Data frame handling\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.505910 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.508837 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.508870 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.508892 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.509720 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.509744 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 
00:01:21.509759 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.509781 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.509798 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.509812 143 log.go:172] (0xc000726f00) (5) Data frame sent\nI0507 00:01:21.509821 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.509828 143 log.go:172] (0xc000726f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.509844 143 log.go:172] (0xc000726f00) (5) Data frame sent\nI0507 00:01:21.514659 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.514675 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.514692 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.515252 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.515270 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.515284 143 log.go:172] (0xc000726f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.515444 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.515466 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.515492 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.521839 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.521864 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.521879 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.522262 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.522275 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.522283 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.522296 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.522303 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 
00:01:21.522312 143 log.go:172] (0xc000726f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.526424 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.526456 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.526476 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.526865 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.526894 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.526906 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.526917 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.526927 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.526936 143 log.go:172] (0xc000726f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30973/\nI0507 00:01:21.533353 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.533380 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.533406 143 log.go:172] (0xc0007265a0) (3) Data frame sent\nI0507 00:01:21.533426 143 log.go:172] (0xc000b56790) Data frame received for 3\nI0507 00:01:21.533449 143 log.go:172] (0xc0007265a0) (3) Data frame handling\nI0507 00:01:21.533471 143 log.go:172] (0xc000b56790) Data frame received for 5\nI0507 00:01:21.533486 143 log.go:172] (0xc000726f00) (5) Data frame handling\nI0507 00:01:21.535046 143 log.go:172] (0xc000b56790) Data frame received for 1\nI0507 00:01:21.535080 143 log.go:172] (0xc00041f680) (1) Data frame handling\nI0507 00:01:21.535102 143 log.go:172] (0xc00041f680) (1) Data frame sent\nI0507 00:01:21.535130 143 log.go:172] (0xc000b56790) (0xc00041f680) Stream removed, broadcasting: 1\nI0507 00:01:21.535158 143 log.go:172] (0xc000b56790) Go away received\nI0507 00:01:21.535626 143 log.go:172] (0xc000b56790) (0xc00041f680) Stream removed, broadcasting: 1\nI0507 00:01:21.535652 143 log.go:172] 
(0xc000b56790) (0xc0007265a0) Stream removed, broadcasting: 3\nI0507 00:01:21.535664 143 log.go:172] (0xc000b56790) (0xc000726f00) Stream removed, broadcasting: 5\n" May 7 00:01:21.544: INFO: stdout: "\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl\naffinity-nodeport-transition-xxcfl" May 7 00:01:21.544: INFO: Received response from host: May 7 00:01:21.544: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.544: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.544: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.544: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.544: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.544: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.544: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.544: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.544: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.544: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.544: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.544: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.544: INFO: Received response from host: 
affinity-nodeport-transition-xxcfl May 7 00:01:21.544: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.544: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.544: INFO: Received response from host: affinity-nodeport-transition-xxcfl May 7 00:01:21.544: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-5700, will wait for the garbage collector to delete the pods May 7 00:01:21.661: INFO: Deleting ReplicationController affinity-nodeport-transition took: 13.151729ms May 7 00:01:22.362: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 700.320511ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:01:35.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5700" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:39.140 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":6,"skipped":83,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events 
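The session-affinity test above curls the NodePort 16 times and then compares which backend pod answered each request. A minimal sketch of just that final comparison, operating on an already-collected list of responses (this is illustrative shell, not the e2e framework's own Go code; the pod names are placeholders):

```shell
# Succeed only when every non-empty response line names the same backend
# pod, i.e. session affinity held across the whole curl loop. The leading
# blank line produced by the test's `echo; curl ...` pattern is filtered
# out, mirroring the empty "Received response from host:" entry in the log.
affinity_holds() {
  [ "$(printf '%s\n' "$1" | grep . | sort -u | wc -l)" -eq 1 ]
}
```

In the run above, the first loop saw three distinct pods (affinity off), while the second saw only `affinity-nodeport-transition-xxcfl` (affinity on), which is exactly the transition the test asserts.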
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:01:36.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 7 00:01:41.077: INFO: &Pod{ObjectMeta:{send-events-75c57a53-ef05-4b30-81c4-e4d35768da3f events-2241 /api/v1/namespaces/events-2241/pods/send-events-75c57a53-ef05-4b30-81c4-e4d35768da3f ef4c7de3-29a6-4dbe-8693-eba87de90eb2 2160025 0 2020-05-07 00:01:36 +0000 UTC map[name:foo time:550600403] map[] [] [] [{e2e.test Update v1 2020-05-07 00:01:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:01:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.10\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srj4t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srj4t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srj4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:n
il,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:01:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:01:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:01:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:01:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.10,StartTime:2020-05-07 00:01:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-07 00:01:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://358e8975e110219a59aa603d362c96ad70297fa92c639ca1575988d0941b55fc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 7 00:01:43.081: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 7 00:01:45.086: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:01:45.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2241" for this suite. 
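The "Saw scheduler event" / "Saw kubelet event" steps above poll the pod's events until one from each source component has appeared. A hedged sketch of that predicate, with the source components passed in as text rather than fetched from the API server (in a live cluster they could come from `kubectl get events` filtered by the pod name; that invocation is an assumption, not taken from this log):

```shell
# Succeed once events from both the scheduler ("default-scheduler") and
# the kubelet ("kubelet") have been observed for the pod. These are the
# standard source.component values for scheduling and container events.
saw_scheduler_and_kubelet_events() {
  printf '%s\n' "$1" | grep -qx 'default-scheduler' &&
    printf '%s\n' "$1" | grep -qx 'kubelet'
}
```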
• [SLOW TEST:9.059 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":7,"skipped":86,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:01:45.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3900 May 7 00:01:49.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3900 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 7 00:01:49.483: INFO: stderr: "I0507 00:01:49.374838 164 log.go:172] (0xc000406370) (0xc00011d680) Create stream\nI0507 00:01:49.374883 164 log.go:172] 
(0xc000406370) (0xc00011d680) Stream added, broadcasting: 1\nI0507 00:01:49.377478 164 log.go:172] (0xc000406370) Reply frame received for 1\nI0507 00:01:49.377527 164 log.go:172] (0xc000406370) (0xc000901400) Create stream\nI0507 00:01:49.377542 164 log.go:172] (0xc000406370) (0xc000901400) Stream added, broadcasting: 3\nI0507 00:01:49.378725 164 log.go:172] (0xc000406370) Reply frame received for 3\nI0507 00:01:49.378752 164 log.go:172] (0xc000406370) (0xc0009019a0) Create stream\nI0507 00:01:49.378762 164 log.go:172] (0xc000406370) (0xc0009019a0) Stream added, broadcasting: 5\nI0507 00:01:49.379737 164 log.go:172] (0xc000406370) Reply frame received for 5\nI0507 00:01:49.472531 164 log.go:172] (0xc000406370) Data frame received for 5\nI0507 00:01:49.472558 164 log.go:172] (0xc0009019a0) (5) Data frame handling\nI0507 00:01:49.472580 164 log.go:172] (0xc0009019a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0507 00:01:49.475884 164 log.go:172] (0xc000406370) Data frame received for 3\nI0507 00:01:49.475912 164 log.go:172] (0xc000901400) (3) Data frame handling\nI0507 00:01:49.475938 164 log.go:172] (0xc000901400) (3) Data frame sent\nI0507 00:01:49.476181 164 log.go:172] (0xc000406370) Data frame received for 3\nI0507 00:01:49.476219 164 log.go:172] (0xc000901400) (3) Data frame handling\nI0507 00:01:49.476242 164 log.go:172] (0xc000406370) Data frame received for 5\nI0507 00:01:49.476259 164 log.go:172] (0xc0009019a0) (5) Data frame handling\nI0507 00:01:49.478251 164 log.go:172] (0xc000406370) Data frame received for 1\nI0507 00:01:49.478277 164 log.go:172] (0xc00011d680) (1) Data frame handling\nI0507 00:01:49.478292 164 log.go:172] (0xc00011d680) (1) Data frame sent\nI0507 00:01:49.478302 164 log.go:172] (0xc000406370) (0xc00011d680) Stream removed, broadcasting: 1\nI0507 00:01:49.478310 164 log.go:172] (0xc000406370) Go away received\nI0507 00:01:49.478544 164 log.go:172] (0xc000406370) (0xc00011d680) Stream 
removed, broadcasting: 1\nI0507 00:01:49.478560 164 log.go:172] (0xc000406370) (0xc000901400) Stream removed, broadcasting: 3\nI0507 00:01:49.478567 164 log.go:172] (0xc000406370) (0xc0009019a0) Stream removed, broadcasting: 5\n" May 7 00:01:49.483: INFO: stdout: "iptables" May 7 00:01:49.483: INFO: proxyMode: iptables May 7 00:01:49.488: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 7 00:01:49.502: INFO: Pod kube-proxy-mode-detector still exists May 7 00:01:51.503: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 7 00:01:51.507: INFO: Pod kube-proxy-mode-detector still exists May 7 00:01:53.503: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 7 00:01:53.507: INFO: Pod kube-proxy-mode-detector still exists May 7 00:01:55.503: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 7 00:01:55.527: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-3900 STEP: creating replication controller affinity-clusterip-timeout in namespace services-3900 I0507 00:01:55.619768 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-3900, replica count: 3 I0507 00:01:58.670262 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:02:01.670488 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:02:04.670767 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 7 00:02:04.676: INFO: Creating new exec pod May 7 00:02:09.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3900 
execpod-affinity5sp8s -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 7 00:02:09.962: INFO: stderr: "I0507 00:02:09.882731 184 log.go:172] (0xc000b938c0) (0xc00067ce60) Create stream\nI0507 00:02:09.882782 184 log.go:172] (0xc000b938c0) (0xc00067ce60) Stream added, broadcasting: 1\nI0507 00:02:09.886703 184 log.go:172] (0xc000b938c0) Reply frame received for 1\nI0507 00:02:09.886734 184 log.go:172] (0xc000b938c0) (0xc0006754a0) Create stream\nI0507 00:02:09.886742 184 log.go:172] (0xc000b938c0) (0xc0006754a0) Stream added, broadcasting: 3\nI0507 00:02:09.887600 184 log.go:172] (0xc000b938c0) Reply frame received for 3\nI0507 00:02:09.887627 184 log.go:172] (0xc000b938c0) (0xc000624a00) Create stream\nI0507 00:02:09.887635 184 log.go:172] (0xc000b938c0) (0xc000624a00) Stream added, broadcasting: 5\nI0507 00:02:09.888462 184 log.go:172] (0xc000b938c0) Reply frame received for 5\nI0507 00:02:09.953531 184 log.go:172] (0xc000b938c0) Data frame received for 5\nI0507 00:02:09.953565 184 log.go:172] (0xc000624a00) (5) Data frame handling\nI0507 00:02:09.953595 184 log.go:172] (0xc000624a00) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0507 00:02:09.955601 184 log.go:172] (0xc000b938c0) Data frame received for 5\nI0507 00:02:09.955627 184 log.go:172] (0xc000624a00) (5) Data frame handling\nI0507 00:02:09.955653 184 log.go:172] (0xc000624a00) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0507 00:02:09.955846 184 log.go:172] (0xc000b938c0) Data frame received for 5\nI0507 00:02:09.955871 184 log.go:172] (0xc000624a00) (5) Data frame handling\nI0507 00:02:09.956136 184 log.go:172] (0xc000b938c0) Data frame received for 3\nI0507 00:02:09.956175 184 log.go:172] (0xc0006754a0) (3) Data frame handling\nI0507 00:02:09.958072 184 log.go:172] (0xc000b938c0) Data frame received for 1\nI0507 00:02:09.958095 184 log.go:172] (0xc00067ce60) (1) Data frame handling\nI0507 00:02:09.958106 184 
log.go:172] (0xc00067ce60) (1) Data frame sent\nI0507 00:02:09.958117 184 log.go:172] (0xc000b938c0) (0xc00067ce60) Stream removed, broadcasting: 1\nI0507 00:02:09.958131 184 log.go:172] (0xc000b938c0) Go away received\nI0507 00:02:09.958509 184 log.go:172] (0xc000b938c0) (0xc00067ce60) Stream removed, broadcasting: 1\nI0507 00:02:09.958525 184 log.go:172] (0xc000b938c0) (0xc0006754a0) Stream removed, broadcasting: 3\nI0507 00:02:09.958533 184 log.go:172] (0xc000b938c0) (0xc000624a00) Stream removed, broadcasting: 5\n" May 7 00:02:09.962: INFO: stdout: "" May 7 00:02:09.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3900 execpod-affinity5sp8s -- /bin/sh -x -c nc -zv -t -w 2 10.101.239.106 80' May 7 00:02:10.168: INFO: stderr: "I0507 00:02:10.083045 203 log.go:172] (0xc0007f2210) (0xc000719040) Create stream\nI0507 00:02:10.083100 203 log.go:172] (0xc0007f2210) (0xc000719040) Stream added, broadcasting: 1\nI0507 00:02:10.085476 203 log.go:172] (0xc0007f2210) Reply frame received for 1\nI0507 00:02:10.085515 203 log.go:172] (0xc0007f2210) (0xc000552280) Create stream\nI0507 00:02:10.085533 203 log.go:172] (0xc0007f2210) (0xc000552280) Stream added, broadcasting: 3\nI0507 00:02:10.086448 203 log.go:172] (0xc0007f2210) Reply frame received for 3\nI0507 00:02:10.086509 203 log.go:172] (0xc0007f2210) (0xc000552a00) Create stream\nI0507 00:02:10.086539 203 log.go:172] (0xc0007f2210) (0xc000552a00) Stream added, broadcasting: 5\nI0507 00:02:10.087555 203 log.go:172] (0xc0007f2210) Reply frame received for 5\nI0507 00:02:10.162637 203 log.go:172] (0xc0007f2210) Data frame received for 5\nI0507 00:02:10.162693 203 log.go:172] (0xc0007f2210) Data frame received for 3\nI0507 00:02:10.162725 203 log.go:172] (0xc000552280) (3) Data frame handling\nI0507 00:02:10.162748 203 log.go:172] (0xc000552a00) (5) Data frame handling\nI0507 00:02:10.162788 203 log.go:172] (0xc000552a00) (5) Data frame 
sent\nI0507 00:02:10.162803 203 log.go:172] (0xc0007f2210) Data frame received for 5\nI0507 00:02:10.162809 203 log.go:172] (0xc000552a00) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.239.106 80\nConnection to 10.101.239.106 80 port [tcp/http] succeeded!\nI0507 00:02:10.163840 203 log.go:172] (0xc0007f2210) Data frame received for 1\nI0507 00:02:10.163863 203 log.go:172] (0xc000719040) (1) Data frame handling\nI0507 00:02:10.163876 203 log.go:172] (0xc000719040) (1) Data frame sent\nI0507 00:02:10.163922 203 log.go:172] (0xc0007f2210) (0xc000719040) Stream removed, broadcasting: 1\nI0507 00:02:10.163947 203 log.go:172] (0xc0007f2210) Go away received\nI0507 00:02:10.164261 203 log.go:172] (0xc0007f2210) (0xc000719040) Stream removed, broadcasting: 1\nI0507 00:02:10.164288 203 log.go:172] (0xc0007f2210) (0xc000552280) Stream removed, broadcasting: 3\nI0507 00:02:10.164297 203 log.go:172] (0xc0007f2210) (0xc000552a00) Stream removed, broadcasting: 5\n" May 7 00:02:10.168: INFO: stdout: "" May 7 00:02:10.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3900 execpod-affinity5sp8s -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.101.239.106:80/ ; done' May 7 00:02:10.571: INFO: stderr: "I0507 00:02:10.405665 227 log.go:172] (0xc000bc6d10) (0xc0009a26e0) Create stream\nI0507 00:02:10.405725 227 log.go:172] (0xc000bc6d10) (0xc0009a26e0) Stream added, broadcasting: 1\nI0507 00:02:10.409914 227 log.go:172] (0xc000bc6d10) Reply frame received for 1\nI0507 00:02:10.409961 227 log.go:172] (0xc000bc6d10) (0xc000516c80) Create stream\nI0507 00:02:10.409985 227 log.go:172] (0xc000bc6d10) (0xc000516c80) Stream added, broadcasting: 3\nI0507 00:02:10.410983 227 log.go:172] (0xc000bc6d10) Reply frame received for 3\nI0507 00:02:10.411010 227 log.go:172] (0xc000bc6d10) (0xc000384320) Create stream\nI0507 00:02:10.411021 227 log.go:172] (0xc000bc6d10) 
(0xc000384320) Stream added, broadcasting: 5\nI0507 00:02:10.411975 227 log.go:172] (0xc000bc6d10) Reply frame received for 5\nI0507 00:02:10.486008 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.486042 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.486054 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.486090 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.486100 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.486115 227 log.go:172] (0xc000384320) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.239.106:80/\nI0507 00:02:10.488427 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.488439 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.488445 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.488880 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.488912 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.488936 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.488960 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.488973 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.488995 227 log.go:172] (0xc000384320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.239.106:80/\nI0507 00:02:10.492889 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.492907 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.492916 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.493482 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.493493 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.493499 227 log.go:172] (0xc000384320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.239.106:80/\nI0507 00:02:10.493520 227 
log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.493542 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.493561 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.496663 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.496683 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.496696 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.497743 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.497765 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.497774 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.497792 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.497820 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.497842 227 log.go:172] (0xc000384320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.239.106:80/\nI0507 00:02:10.501238 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.501254 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.501260 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.501870 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.501898 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.501912 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.501931 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.501945 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.501957 227 log.go:172] (0xc000384320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.239.106:80/\nI0507 00:02:10.505459 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.505476 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.505495 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.506156 227 
log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.506173 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.506207 227 log.go:172] (0xc000384320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.239.106:80/I0507 00:02:10.506363 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.506405 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.506441 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.506477 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.506499 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.506521 227 log.go:172] (0xc000384320) (5) Data frame sent\n\nI0507 00:02:10.509687 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.509699 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.509706 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.510137 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.510150 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.510162 227 log.go:172] (0xc000384320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.239.106:80/\nI0507 00:02:10.510268 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.510281 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.510293 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.516561 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.516579 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.516593 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.517071 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.517085 227 log.go:172] (0xc000384320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.239.106:80/\nI0507 00:02:10.517105 227 log.go:172] 
(0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.517329 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.517352 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.517377 227 log.go:172] (0xc000384320) (5) Data frame sent\nI0507 00:02:10.522915 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.522932 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.522942 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.523916 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.523954 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.523972 227 log.go:172] (0xc000384320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.239.106:80/\nI0507 00:02:10.523991 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.524003 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.524021 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.527449 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.527468 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.527476 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.528230 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.528353 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.528393 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.528428 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.528455 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.528496 227 log.go:172] (0xc000384320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.239.106:80/\nI0507 00:02:10.533871 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.533885 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.533901 227 log.go:172] 
(0xc000516c80) (3) Data frame sent\nI0507 00:02:10.534278 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.534308 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.534367 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.534395 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.534436 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.534471 227 log.go:172] (0xc000384320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.239.106:80/\nI0507 00:02:10.538003 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.538015 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.538023 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.538408 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.538442 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.538454 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.538473 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.538483 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.538494 227 log.go:172] (0xc000384320) (5) Data frame sent\nI0507 00:02:10.538514 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.538525 227 log.go:172] (0xc000384320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.239.106:80/\nI0507 00:02:10.538545 227 log.go:172] (0xc000384320) (5) Data frame sent\nI0507 00:02:10.542423 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.542435 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.542440 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.542936 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.542952 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.542964 227 log.go:172] 
(0xc000516c80) (3) Data frame sent\nI0507 00:02:10.542978 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.542988 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.542997 227 log.go:172] (0xc000384320) (5) Data frame sent\nI0507 00:02:10.543007 227 log.go:172] (0xc000bc6d10) Data frame received for 5\n+ echo\n+ curl -q -sI0507 00:02:10.543016 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.543066 227 log.go:172] (0xc000384320) (5) Data frame sent\n --connect-timeout 2 http://10.101.239.106:80/\nI0507 00:02:10.549641 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.549657 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.549665 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.550255 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.550269 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.550291 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.550331 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.550348 227 log.go:172] (0xc000384320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.239.106:80/\nI0507 00:02:10.550368 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.554783 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.554796 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.554804 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.555207 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.555229 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.555240 227 log.go:172] (0xc000384320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0507 00:02:10.555250 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.555310 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 
00:02:10.555354 227 log.go:172] (0xc000384320) (5) Data frame sent\n http://10.101.239.106:80/\nI0507 00:02:10.555380 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.555392 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.555403 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.558693 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.558726 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.558761 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.559083 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.559172 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.559200 227 log.go:172] (0xc000384320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.239.106:80/\nI0507 00:02:10.559259 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.559287 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.559325 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.563257 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.563271 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.563287 227 log.go:172] (0xc000516c80) (3) Data frame sent\nI0507 00:02:10.563887 227 log.go:172] (0xc000bc6d10) Data frame received for 3\nI0507 00:02:10.563906 227 log.go:172] (0xc000bc6d10) Data frame received for 5\nI0507 00:02:10.563928 227 log.go:172] (0xc000384320) (5) Data frame handling\nI0507 00:02:10.563946 227 log.go:172] (0xc000516c80) (3) Data frame handling\nI0507 00:02:10.565818 227 log.go:172] (0xc000bc6d10) Data frame received for 1\nI0507 00:02:10.565837 227 log.go:172] (0xc0009a26e0) (1) Data frame handling\nI0507 00:02:10.565849 227 log.go:172] (0xc0009a26e0) (1) Data frame sent\nI0507 00:02:10.565861 227 log.go:172] (0xc000bc6d10) (0xc0009a26e0) Stream removed, broadcasting: 1\nI0507 00:02:10.565926 227 
log.go:172] (0xc000bc6d10) Go away received\nI0507 00:02:10.566200 227 log.go:172] (0xc000bc6d10) (0xc0009a26e0) Stream removed, broadcasting: 1\nI0507 00:02:10.566213 227 log.go:172] (0xc000bc6d10) (0xc000516c80) Stream removed, broadcasting: 3\nI0507 00:02:10.566223 227 log.go:172] (0xc000bc6d10) (0xc000384320) Stream removed, broadcasting: 5\n" May 7 00:02:10.572: INFO: stdout: "\naffinity-clusterip-timeout-7qfqk\naffinity-clusterip-timeout-7qfqk\naffinity-clusterip-timeout-7qfqk\naffinity-clusterip-timeout-7qfqk\naffinity-clusterip-timeout-7qfqk\naffinity-clusterip-timeout-7qfqk\naffinity-clusterip-timeout-7qfqk\naffinity-clusterip-timeout-7qfqk\naffinity-clusterip-timeout-7qfqk\naffinity-clusterip-timeout-7qfqk\naffinity-clusterip-timeout-7qfqk\naffinity-clusterip-timeout-7qfqk\naffinity-clusterip-timeout-7qfqk\naffinity-clusterip-timeout-7qfqk\naffinity-clusterip-timeout-7qfqk\naffinity-clusterip-timeout-7qfqk" May 7 00:02:10.572: INFO: Received response from host: May 7 00:02:10.572: INFO: Received response from host: affinity-clusterip-timeout-7qfqk May 7 00:02:10.572: INFO: Received response from host: affinity-clusterip-timeout-7qfqk May 7 00:02:10.572: INFO: Received response from host: affinity-clusterip-timeout-7qfqk May 7 00:02:10.572: INFO: Received response from host: affinity-clusterip-timeout-7qfqk May 7 00:02:10.572: INFO: Received response from host: affinity-clusterip-timeout-7qfqk May 7 00:02:10.572: INFO: Received response from host: affinity-clusterip-timeout-7qfqk May 7 00:02:10.572: INFO: Received response from host: affinity-clusterip-timeout-7qfqk May 7 00:02:10.572: INFO: Received response from host: affinity-clusterip-timeout-7qfqk May 7 00:02:10.572: INFO: Received response from host: affinity-clusterip-timeout-7qfqk May 7 00:02:10.572: INFO: Received response from host: affinity-clusterip-timeout-7qfqk May 7 00:02:10.572: INFO: Received response from host: affinity-clusterip-timeout-7qfqk May 7 00:02:10.572: INFO: Received response 
from host: affinity-clusterip-timeout-7qfqk May 7 00:02:10.572: INFO: Received response from host: affinity-clusterip-timeout-7qfqk May 7 00:02:10.572: INFO: Received response from host: affinity-clusterip-timeout-7qfqk May 7 00:02:10.572: INFO: Received response from host: affinity-clusterip-timeout-7qfqk May 7 00:02:10.572: INFO: Received response from host: affinity-clusterip-timeout-7qfqk May 7 00:02:10.572: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3900 execpod-affinity5sp8s -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.101.239.106:80/' May 7 00:02:10.780: INFO: stderr: "I0507 00:02:10.699572 247 log.go:172] (0xc00003a420) (0xc0003d43c0) Create stream\nI0507 00:02:10.699623 247 log.go:172] (0xc00003a420) (0xc0003d43c0) Stream added, broadcasting: 1\nI0507 00:02:10.701326 247 log.go:172] (0xc00003a420) Reply frame received for 1\nI0507 00:02:10.701405 247 log.go:172] (0xc00003a420) (0xc0003bcfa0) Create stream\nI0507 00:02:10.701439 247 log.go:172] (0xc00003a420) (0xc0003bcfa0) Stream added, broadcasting: 3\nI0507 00:02:10.702398 247 log.go:172] (0xc00003a420) Reply frame received for 3\nI0507 00:02:10.702438 247 log.go:172] (0xc00003a420) (0xc0003d5720) Create stream\nI0507 00:02:10.702450 247 log.go:172] (0xc00003a420) (0xc0003d5720) Stream added, broadcasting: 5\nI0507 00:02:10.703442 247 log.go:172] (0xc00003a420) Reply frame received for 5\nI0507 00:02:10.770771 247 log.go:172] (0xc00003a420) Data frame received for 5\nI0507 00:02:10.770804 247 log.go:172] (0xc0003d5720) (5) Data frame handling\nI0507 00:02:10.770824 247 log.go:172] (0xc0003d5720) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.101.239.106:80/\nI0507 00:02:10.772877 247 log.go:172] (0xc00003a420) Data frame received for 3\nI0507 00:02:10.772903 247 log.go:172] (0xc0003bcfa0) (3) Data frame handling\nI0507 00:02:10.772921 247 log.go:172] (0xc0003bcfa0) (3) Data frame 
sent\nI0507 00:02:10.773593 247 log.go:172] (0xc00003a420) Data frame received for 5\nI0507 00:02:10.773617 247 log.go:172] (0xc00003a420) Data frame received for 3\nI0507 00:02:10.773648 247 log.go:172] (0xc0003bcfa0) (3) Data frame handling\nI0507 00:02:10.773681 247 log.go:172] (0xc0003d5720) (5) Data frame handling\nI0507 00:02:10.774959 247 log.go:172] (0xc00003a420) Data frame received for 1\nI0507 00:02:10.774980 247 log.go:172] (0xc0003d43c0) (1) Data frame handling\nI0507 00:02:10.774990 247 log.go:172] (0xc0003d43c0) (1) Data frame sent\nI0507 00:02:10.775010 247 log.go:172] (0xc00003a420) (0xc0003d43c0) Stream removed, broadcasting: 1\nI0507 00:02:10.775118 247 log.go:172] (0xc00003a420) Go away received\nI0507 00:02:10.775401 247 log.go:172] (0xc00003a420) (0xc0003d43c0) Stream removed, broadcasting: 1\nI0507 00:02:10.775418 247 log.go:172] (0xc00003a420) (0xc0003bcfa0) Stream removed, broadcasting: 3\nI0507 00:02:10.775427 247 log.go:172] (0xc00003a420) (0xc0003d5720) Stream removed, broadcasting: 5\n" May 7 00:02:10.780: INFO: stdout: "affinity-clusterip-timeout-7qfqk" May 7 00:02:25.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3900 execpod-affinity5sp8s -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.101.239.106:80/' May 7 00:02:26.020: INFO: stderr: "I0507 00:02:25.923430 268 log.go:172] (0xc000938c60) (0xc0006b7b80) Create stream\nI0507 00:02:25.923519 268 log.go:172] (0xc000938c60) (0xc0006b7b80) Stream added, broadcasting: 1\nI0507 00:02:25.928740 268 log.go:172] (0xc000938c60) Reply frame received for 1\nI0507 00:02:25.928790 268 log.go:172] (0xc000938c60) (0xc0005cc500) Create stream\nI0507 00:02:25.928806 268 log.go:172] (0xc000938c60) (0xc0005cc500) Stream added, broadcasting: 3\nI0507 00:02:25.930126 268 log.go:172] (0xc000938c60) Reply frame received for 3\nI0507 00:02:25.930167 268 log.go:172] (0xc000938c60) (0xc00054e140) Create 
stream\nI0507 00:02:25.930175 268 log.go:172] (0xc000938c60) (0xc00054e140) Stream added, broadcasting: 5\nI0507 00:02:25.930942 268 log.go:172] (0xc000938c60) Reply frame received for 5\nI0507 00:02:26.005689 268 log.go:172] (0xc000938c60) Data frame received for 5\nI0507 00:02:26.005722 268 log.go:172] (0xc00054e140) (5) Data frame handling\nI0507 00:02:26.005745 268 log.go:172] (0xc00054e140) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.101.239.106:80/\nI0507 00:02:26.011599 268 log.go:172] (0xc000938c60) Data frame received for 3\nI0507 00:02:26.011616 268 log.go:172] (0xc0005cc500) (3) Data frame handling\nI0507 00:02:26.011634 268 log.go:172] (0xc0005cc500) (3) Data frame sent\nI0507 00:02:26.012740 268 log.go:172] (0xc000938c60) Data frame received for 5\nI0507 00:02:26.012772 268 log.go:172] (0xc00054e140) (5) Data frame handling\nI0507 00:02:26.012876 268 log.go:172] (0xc000938c60) Data frame received for 3\nI0507 00:02:26.012904 268 log.go:172] (0xc0005cc500) (3) Data frame handling\nI0507 00:02:26.014712 268 log.go:172] (0xc000938c60) Data frame received for 1\nI0507 00:02:26.014729 268 log.go:172] (0xc0006b7b80) (1) Data frame handling\nI0507 00:02:26.014744 268 log.go:172] (0xc0006b7b80) (1) Data frame sent\nI0507 00:02:26.014754 268 log.go:172] (0xc000938c60) (0xc0006b7b80) Stream removed, broadcasting: 1\nI0507 00:02:26.014812 268 log.go:172] (0xc000938c60) Go away received\nI0507 00:02:26.015068 268 log.go:172] (0xc000938c60) (0xc0006b7b80) Stream removed, broadcasting: 1\nI0507 00:02:26.015086 268 log.go:172] (0xc000938c60) (0xc0005cc500) Stream removed, broadcasting: 3\nI0507 00:02:26.015096 268 log.go:172] (0xc000938c60) (0xc00054e140) Stream removed, broadcasting: 5\n" May 7 00:02:26.020: INFO: stdout: "affinity-clusterip-timeout-bzfj2" May 7 00:02:26.020: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-3900, will wait for the garbage collector to delete the 
pods May 7 00:02:26.123: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 5.910131ms May 7 00:02:26.523: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 400.258466ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:02:35.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3900" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:50.253 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":8,"skipped":94,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:02:35.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with 
name cm-test-opt-del-791f250a-2712-4e5e-8893-3a915e6b877a STEP: Creating configMap with name cm-test-opt-upd-85b34fa1-af57-4384-862b-e6f113d50d25 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-791f250a-2712-4e5e-8893-3a915e6b877a STEP: Updating configmap cm-test-opt-upd-85b34fa1-af57-4384-862b-e6f113d50d25 STEP: Creating configMap with name cm-test-opt-create-b407fd94-bc9b-4233-bd80-c369724d4208 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:04:06.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7995" for this suite. • [SLOW TEST:90.652 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":9,"skipped":99,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:04:06.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-x2bx STEP: Creating a pod to test atomic-volume-subpath May 7 00:04:06.230: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-x2bx" in namespace "subpath-6332" to be "Succeeded or Failed" May 7 00:04:06.313: INFO: Pod "pod-subpath-test-configmap-x2bx": Phase="Pending", Reason="", readiness=false. Elapsed: 83.157772ms May 7 00:04:08.319: INFO: Pod "pod-subpath-test-configmap-x2bx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089097488s May 7 00:04:10.322: INFO: Pod "pod-subpath-test-configmap-x2bx": Phase="Running", Reason="", readiness=true. Elapsed: 4.092059688s May 7 00:04:12.367: INFO: Pod "pod-subpath-test-configmap-x2bx": Phase="Running", Reason="", readiness=true. Elapsed: 6.136979917s May 7 00:04:14.371: INFO: Pod "pod-subpath-test-configmap-x2bx": Phase="Running", Reason="", readiness=true. Elapsed: 8.141047028s May 7 00:04:16.375: INFO: Pod "pod-subpath-test-configmap-x2bx": Phase="Running", Reason="", readiness=true. Elapsed: 10.145249852s May 7 00:04:18.379: INFO: Pod "pod-subpath-test-configmap-x2bx": Phase="Running", Reason="", readiness=true. Elapsed: 12.149637948s May 7 00:04:20.383: INFO: Pod "pod-subpath-test-configmap-x2bx": Phase="Running", Reason="", readiness=true. Elapsed: 14.153136113s May 7 00:04:22.387: INFO: Pod "pod-subpath-test-configmap-x2bx": Phase="Running", Reason="", readiness=true. Elapsed: 16.156922094s May 7 00:04:24.391: INFO: Pod "pod-subpath-test-configmap-x2bx": Phase="Running", Reason="", readiness=true. Elapsed: 18.161378488s May 7 00:04:26.396: INFO: Pod "pod-subpath-test-configmap-x2bx": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.166104726s May 7 00:04:28.415: INFO: Pod "pod-subpath-test-configmap-x2bx": Phase="Running", Reason="", readiness=true. Elapsed: 22.185081355s May 7 00:04:30.419: INFO: Pod "pod-subpath-test-configmap-x2bx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.189572622s STEP: Saw pod success May 7 00:04:30.419: INFO: Pod "pod-subpath-test-configmap-x2bx" satisfied condition "Succeeded or Failed" May 7 00:04:30.422: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-x2bx container test-container-subpath-configmap-x2bx: STEP: delete the pod May 7 00:04:30.470: INFO: Waiting for pod pod-subpath-test-configmap-x2bx to disappear May 7 00:04:30.709: INFO: Pod pod-subpath-test-configmap-x2bx no longer exists STEP: Deleting pod pod-subpath-test-configmap-x2bx May 7 00:04:30.709: INFO: Deleting pod "pod-subpath-test-configmap-x2bx" in namespace "subpath-6332" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:04:30.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6332" for this suite. • [SLOW TEST:24.709 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":10,"skipped":153,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:04:30.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:04:43.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6499" for this suite. • [SLOW TEST:13.212 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":288,"completed":11,"skipped":163,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:04:43.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 May 7 00:04:44.105: INFO: Waiting up to 1m0s for all nodes to be ready May 7 00:05:44.127: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:05:44.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
May 7 00:05:48.227: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:06:06.468: INFO: pods created so far: [1 1 1] May 7 00:06:06.468: INFO: length of pods created so far: 3 May 7 00:06:20.478: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:06:27.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-7621" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:06:27.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-4160" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:103.650 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":12,"skipped":172,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:06:27.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:06:39.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7250" for this suite. • [SLOW TEST:11.406 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":288,"completed":13,"skipped":192,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:06:39.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 7 00:06:39.111: INFO: Waiting up to 5m0s for pod "var-expansion-65ac57fc-9a0c-4e4c-adf9-eb183e45c816" in namespace "var-expansion-1522" to be "Succeeded or Failed" May 7 00:06:39.183: INFO: Pod "var-expansion-65ac57fc-9a0c-4e4c-adf9-eb183e45c816": Phase="Pending", Reason="", readiness=false. Elapsed: 72.774922ms May 7 00:06:41.187: INFO: Pod "var-expansion-65ac57fc-9a0c-4e4c-adf9-eb183e45c816": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076510445s May 7 00:06:43.191: INFO: Pod "var-expansion-65ac57fc-9a0c-4e4c-adf9-eb183e45c816": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.080009806s STEP: Saw pod success May 7 00:06:43.191: INFO: Pod "var-expansion-65ac57fc-9a0c-4e4c-adf9-eb183e45c816" satisfied condition "Succeeded or Failed" May 7 00:06:43.193: INFO: Trying to get logs from node latest-worker2 pod var-expansion-65ac57fc-9a0c-4e4c-adf9-eb183e45c816 container dapi-container: STEP: delete the pod May 7 00:06:43.258: INFO: Waiting for pod var-expansion-65ac57fc-9a0c-4e4c-adf9-eb183e45c816 to disappear May 7 00:06:43.281: INFO: Pod var-expansion-65ac57fc-9a0c-4e4c-adf9-eb183e45c816 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:06:43.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1522" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":14,"skipped":216,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:06:43.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5307.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service.dns-5307.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5307.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5307.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5307.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5307.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5307.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5307.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5307.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5307.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5307.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 137.125.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.125.137_udp@PTR;check="$$(dig +tcp +noall +answer +search 137.125.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.125.137_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5307.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5307.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5307.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5307.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5307.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5307.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5307.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5307.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5307.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5307.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5307.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 137.125.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.125.137_udp@PTR;check="$$(dig +tcp +noall +answer +search 137.125.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.125.137_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 00:06:49.614: INFO: Unable to read wheezy_udp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:49.617: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:49.619: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:49.622: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:49.643: INFO: Unable to read jessie_udp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:49.647: INFO: Unable to read jessie_tcp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:49.650: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod 
dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:49.654: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:49.672: INFO: Lookups using dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482 failed for: [wheezy_udp@dns-test-service.dns-5307.svc.cluster.local wheezy_tcp@dns-test-service.dns-5307.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local jessie_udp@dns-test-service.dns-5307.svc.cluster.local jessie_tcp@dns-test-service.dns-5307.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local] May 7 00:06:54.677: INFO: Unable to read wheezy_udp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:54.681: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:54.685: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:54.689: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod 
dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:54.711: INFO: Unable to read jessie_udp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:54.715: INFO: Unable to read jessie_tcp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:54.719: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:54.722: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:54.736: INFO: Lookups using dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482 failed for: [wheezy_udp@dns-test-service.dns-5307.svc.cluster.local wheezy_tcp@dns-test-service.dns-5307.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local jessie_udp@dns-test-service.dns-5307.svc.cluster.local jessie_tcp@dns-test-service.dns-5307.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local] May 7 00:06:59.677: INFO: Unable to read wheezy_udp@dns-test-service.dns-5307.svc.cluster.local from pod 
dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:59.682: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:59.686: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:59.689: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:59.717: INFO: Unable to read jessie_udp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:59.720: INFO: Unable to read jessie_tcp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:59.722: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:59.724: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the 
requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:06:59.736: INFO: Lookups using dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482 failed for: [wheezy_udp@dns-test-service.dns-5307.svc.cluster.local wheezy_tcp@dns-test-service.dns-5307.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local jessie_udp@dns-test-service.dns-5307.svc.cluster.local jessie_tcp@dns-test-service.dns-5307.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local] May 7 00:07:04.678: INFO: Unable to read wheezy_udp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:04.682: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:04.686: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:04.689: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:04.714: INFO: Unable to read jessie_udp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods 
dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:04.717: INFO: Unable to read jessie_tcp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:04.720: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:04.723: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:04.739: INFO: Lookups using dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482 failed for: [wheezy_udp@dns-test-service.dns-5307.svc.cluster.local wheezy_tcp@dns-test-service.dns-5307.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local jessie_udp@dns-test-service.dns-5307.svc.cluster.local jessie_tcp@dns-test-service.dns-5307.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local] May 7 00:07:09.676: INFO: Unable to read wheezy_udp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:09.679: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) 
May 7 00:07:09.682: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:09.685: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:09.707: INFO: Unable to read jessie_udp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:09.709: INFO: Unable to read jessie_tcp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:09.711: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:09.714: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:09.727: INFO: Lookups using dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482 failed for: [wheezy_udp@dns-test-service.dns-5307.svc.cluster.local wheezy_tcp@dns-test-service.dns-5307.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local 
jessie_udp@dns-test-service.dns-5307.svc.cluster.local jessie_tcp@dns-test-service.dns-5307.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local] May 7 00:07:14.677: INFO: Unable to read wheezy_udp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:14.682: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:14.686: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:14.688: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:14.747: INFO: Unable to read jessie_udp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:14.750: INFO: Unable to read jessie_tcp@dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:14.752: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod 
dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:14.756: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:14.774: INFO: Lookups using dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482 failed for: [wheezy_udp@dns-test-service.dns-5307.svc.cluster.local wheezy_tcp@dns-test-service.dns-5307.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local jessie_udp@dns-test-service.dns-5307.svc.cluster.local jessie_tcp@dns-test-service.dns-5307.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local] May 7 00:07:19.689: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local from pod dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482: the server could not find the requested resource (get pods dns-test-38ad8cac-818a-4770-a5a6-18e68271c482) May 7 00:07:19.744: INFO: Lookups using dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482 failed for: [wheezy_tcp@_http._tcp.dns-test-service.dns-5307.svc.cluster.local] May 7 00:07:25.235: INFO: DNS probes using dns-5307/dns-test-38ad8cac-818a-4770-a5a6-18e68271c482 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:07:26.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5307" for this suite. 
• [SLOW TEST:43.972 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":15,"skipped":258,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:07:27.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:07:27.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7006" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":16,"skipped":277,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:07:27.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 7 00:07:28.032: INFO: namespace kubectl-8294 May 7 00:07:28.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8294' May 7 00:07:29.382: INFO: stderr: "" May 7 00:07:29.382: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 7 00:07:30.597: INFO: Selector matched 1 pods for map[app:agnhost] May 7 00:07:30.597: INFO: Found 0 / 1 May 7 00:07:31.641: INFO: Selector matched 1 pods for map[app:agnhost] May 7 00:07:31.641: INFO: Found 0 / 1 May 7 00:07:32.407: INFO: Selector matched 1 pods for map[app:agnhost] May 7 00:07:32.407: INFO: Found 0 / 1 May 7 00:07:33.831: INFO: Selector matched 1 pods for map[app:agnhost] May 7 00:07:33.831: INFO: Found 0 / 1 May 7 00:07:34.675: INFO: Selector matched 1 pods for map[app:agnhost] May 7 00:07:34.675: INFO: Found 0 / 1 May 7 00:07:35.549: INFO: Selector matched 1 pods for map[app:agnhost] May 7 00:07:35.549: INFO: Found 1 / 1 May 7 00:07:35.549: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 7 00:07:35.562: INFO: Selector matched 1 pods for map[app:agnhost] May 7 00:07:35.562: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 7 00:07:35.562: INFO: wait on agnhost-master startup in kubectl-8294 May 7 00:07:35.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-vlg8r agnhost-master --namespace=kubectl-8294' May 7 00:07:35.696: INFO: stderr: "" May 7 00:07:35.696: INFO: stdout: "Paused\n" STEP: exposing RC May 7 00:07:35.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8294' May 7 00:07:35.852: INFO: stderr: "" May 7 00:07:35.853: INFO: stdout: "service/rm2 exposed\n" May 7 00:07:35.858: INFO: Service rm2 in namespace kubectl-8294 found. 
STEP: exposing service May 7 00:07:37.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8294' May 7 00:07:38.130: INFO: stderr: "" May 7 00:07:38.130: INFO: stdout: "service/rm3 exposed\n" May 7 00:07:38.170: INFO: Service rm3 in namespace kubectl-8294 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:07:40.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8294" for this suite. • [SLOW TEST:12.747 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":17,"skipped":278,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:07:40.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] 
should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-8240 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8240 to expose endpoints map[] May 7 00:07:40.643: INFO: successfully validated that service multi-endpoint-test in namespace services-8240 exposes endpoints map[] (14.820284ms elapsed) STEP: Creating pod pod1 in namespace services-8240 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8240 to expose endpoints map[pod1:[100]] May 7 00:07:44.931: INFO: successfully validated that service multi-endpoint-test in namespace services-8240 exposes endpoints map[pod1:[100]] (4.095309048s elapsed) STEP: Creating pod pod2 in namespace services-8240 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8240 to expose endpoints map[pod1:[100] pod2:[101]] May 7 00:07:49.302: INFO: Unexpected endpoints: found map[8c6cf04c-22cc-44fb-baca-6c67cc673bc4:[100]], expected map[pod1:[100] pod2:[101]] (4.366463656s elapsed, will retry) May 7 00:07:50.311: INFO: successfully validated that service multi-endpoint-test in namespace services-8240 exposes endpoints map[pod1:[100] pod2:[101]] (5.375619167s elapsed) STEP: Deleting pod pod1 in namespace services-8240 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8240 to expose endpoints map[pod2:[101]] May 7 00:07:51.586: INFO: successfully validated that service multi-endpoint-test in namespace services-8240 exposes endpoints map[pod2:[101]] (1.270892736s elapsed) STEP: Deleting pod pod2 in namespace services-8240 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8240 to expose endpoints map[] May 7 00:07:52.782: INFO: successfully validated that service multi-endpoint-test in namespace services-8240 exposes endpoints map[] 
(1.191888292s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:07:53.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8240" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:13.569 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":18,"skipped":306,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:07:53.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:07:54.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-2184" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":19,"skipped":344,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:07:54.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:07:55.568: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"39c6ee87-a465-4a3e-a1b6-7fcebd3268bc", Controller:(*bool)(0xc002b74332), BlockOwnerDeletion:(*bool)(0xc002b74333)}} May 7 00:07:55.660: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"72183b70-ecdb-49a0-b2ea-8f3b0adae9de", Controller:(*bool)(0xc00314734a), BlockOwnerDeletion:(*bool)(0xc00314734b)}} May 7 00:07:55.880: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a7cfe791-4c10-467b-ad26-df97d2f2ea96", Controller:(*bool)(0xc002c60412), BlockOwnerDeletion:(*bool)(0xc002c60413)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:08:01.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4411" for this suite. 
• [SLOW TEST:6.616 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":20,"skipped":348,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:08:01.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 7 00:08:02.534: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 7 00:08:04.623: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406882, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406882, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406882, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406882, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:08:06.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406882, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406882, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406882, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406882, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:08:08.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406882, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406882, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406882, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406882, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 00:08:11.909: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:08:12.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7511" for this suite. STEP: Destroying namespace "webhook-7511-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.164 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":21,"skipped":350,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:08:13.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:08:14.016: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-d8cc9355-59d1-42b6-90df-cbd1778ce323" in namespace "security-context-test-2459" to be "Succeeded or Failed" May 7 
00:08:14.182: INFO: Pod "alpine-nnp-false-d8cc9355-59d1-42b6-90df-cbd1778ce323": Phase="Pending", Reason="", readiness=false. Elapsed: 165.693805ms May 7 00:08:16.185: INFO: Pod "alpine-nnp-false-d8cc9355-59d1-42b6-90df-cbd1778ce323": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169061362s May 7 00:08:18.236: INFO: Pod "alpine-nnp-false-d8cc9355-59d1-42b6-90df-cbd1778ce323": Phase="Pending", Reason="", readiness=false. Elapsed: 4.219845732s May 7 00:08:20.243: INFO: Pod "alpine-nnp-false-d8cc9355-59d1-42b6-90df-cbd1778ce323": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.226858451s May 7 00:08:20.243: INFO: Pod "alpine-nnp-false-d8cc9355-59d1-42b6-90df-cbd1778ce323" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:08:20.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2459" for this suite. 
• [SLOW TEST:7.120 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":22,"skipped":360,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:08:20.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 7 00:08:21.645: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 7 00:08:23.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406901, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406901, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406901, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406901, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:08:25.667: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406901, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406901, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406901, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406901, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 00:08:28.744: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:08:29.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6370" for this suite. STEP: Destroying namespace "webhook-6370-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.165 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":23,"skipped":380,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:08:29.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 7 00:08:30.377: INFO: Pod name pod-release: Found 0 pods out of 1 May 7 00:08:35.382: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:08:36.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-245" for this suite. 
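The ReplicationController test above turns on equality-based label-selector matching: the moment a pod's labels stop matching the controller's selector, the controller releases it. A minimal sketch of that matching rule (simplified data model, illustrative names; this is not the e2e framework's or controller's actual code):

```python
def selector_matches(selector, labels):
    """Equality-based selector semantics: every key/value pair in the
    selector must be present, with the same value, in the pod's labels.
    Extra pod labels are ignored."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-release"}       # hypothetical RC selector
pod_labels = {"name": "pod-release"}     # pod initially matches

print(selector_matches(selector, pod_labels))   # True: the RC owns the pod
pod_labels["name"] = "pod-release-changed"      # label edited, as in the test
print(selector_matches(selector, pod_labels))   # False: the pod is released
```

Set-based selector operators (`In`, `NotIn`, `Exists`) are omitted here; the conformance test only needs the equality case.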
• [SLOW TEST:6.556 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":24,"skipped":394,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:08:36.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1439 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1439 STEP: creating replication controller externalsvc in namespace services-1439 I0507 00:08:36.698806 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1439, replica count: 2 I0507 00:08:39.749259 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:08:42.749499 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 7 00:08:42.836: INFO: Creating new exec pod May 7 00:08:46.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1439 execpod5hbr6 -- /bin/sh -x -c nslookup clusterip-service' May 7 00:08:47.149: INFO: stderr: "I0507 00:08:47.048585 370 log.go:172] (0xc00003b4a0) (0xc00031f360) Create stream\nI0507 00:08:47.048656 370 log.go:172] (0xc00003b4a0) (0xc00031f360) Stream added, broadcasting: 1\nI0507 00:08:47.053958 370 log.go:172] (0xc00003b4a0) Reply frame received for 1\nI0507 00:08:47.054014 370 log.go:172] (0xc00003b4a0) (0xc000876dc0) Create stream\nI0507 00:08:47.054049 370 log.go:172] (0xc00003b4a0) (0xc000876dc0) Stream added, broadcasting: 3\nI0507 00:08:47.055097 370 log.go:172] (0xc00003b4a0) Reply frame received for 3\nI0507 00:08:47.055147 370 log.go:172] (0xc00003b4a0) (0xc00024f860) Create stream\nI0507 00:08:47.055163 370 log.go:172] (0xc00003b4a0) (0xc00024f860) Stream added, broadcasting: 5\nI0507 00:08:47.056252 370 log.go:172] (0xc00003b4a0) Reply frame received for 5\nI0507 00:08:47.135776 370 log.go:172] (0xc00003b4a0) Data frame received for 5\nI0507 00:08:47.135807 370 log.go:172] (0xc00024f860) (5) Data frame handling\nI0507 00:08:47.135829 370 log.go:172] (0xc00024f860) (5) Data frame sent\n+ nslookup clusterip-service\nI0507 00:08:47.141605 370 log.go:172] (0xc00003b4a0) Data frame received for 3\nI0507 00:08:47.141625 370 log.go:172] (0xc000876dc0) (3) Data frame handling\nI0507 00:08:47.141650 370 log.go:172] (0xc000876dc0) (3) Data frame sent\nI0507 00:08:47.142414 370 log.go:172] (0xc00003b4a0) Data frame received for 3\nI0507 00:08:47.142428 370 log.go:172] 
(0xc000876dc0) (3) Data frame handling\nI0507 00:08:47.142447 370 log.go:172] (0xc000876dc0) (3) Data frame sent\nI0507 00:08:47.142961 370 log.go:172] (0xc00003b4a0) Data frame received for 3\nI0507 00:08:47.142979 370 log.go:172] (0xc000876dc0) (3) Data frame handling\nI0507 00:08:47.143071 370 log.go:172] (0xc00003b4a0) Data frame received for 5\nI0507 00:08:47.143100 370 log.go:172] (0xc00024f860) (5) Data frame handling\nI0507 00:08:47.144732 370 log.go:172] (0xc00003b4a0) Data frame received for 1\nI0507 00:08:47.144762 370 log.go:172] (0xc00031f360) (1) Data frame handling\nI0507 00:08:47.144795 370 log.go:172] (0xc00031f360) (1) Data frame sent\nI0507 00:08:47.144821 370 log.go:172] (0xc00003b4a0) (0xc00031f360) Stream removed, broadcasting: 1\nI0507 00:08:47.144894 370 log.go:172] (0xc00003b4a0) Go away received\nI0507 00:08:47.145335 370 log.go:172] (0xc00003b4a0) (0xc00031f360) Stream removed, broadcasting: 1\nI0507 00:08:47.145353 370 log.go:172] (0xc00003b4a0) (0xc000876dc0) Stream removed, broadcasting: 3\nI0507 00:08:47.145367 370 log.go:172] (0xc00003b4a0) (0xc00024f860) Stream removed, broadcasting: 5\n" May 7 00:08:47.149: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1439.svc.cluster.local\tcanonical name = externalsvc.services-1439.svc.cluster.local.\nName:\texternalsvc.services-1439.svc.cluster.local\nAddress: 10.105.210.53\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1439, will wait for the garbage collector to delete the pods May 7 00:08:47.211: INFO: Deleting ReplicationController externalsvc took: 7.573258ms May 7 00:08:47.311: INFO: Terminating ReplicationController externalsvc pods took: 100.193391ms May 7 00:08:55.025: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:08:55.102: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "services-1439" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:18.722 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":25,"skipped":404,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:08:55.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 7 00:09:01.883: INFO: Successfully updated pod "annotationupdatedfdae462-7c61-4d03-a300-8c9f431167d2" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:09:06.103: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3744" for this suite. • [SLOW TEST:11.021 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":26,"skipped":409,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:09:06.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0507 00:09:47.010689 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 7 00:09:47.010: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:09:47.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1500" for this suite. 
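The garbage-collector test above deletes an RC with delete options requesting orphaning, so the dependents keep running but lose the ownerReference that pointed at the deleted controller. A rough sketch of that bookkeeping (simplified dict-based objects, illustrative names; not the actual garbage-collector implementation):

```python
def orphan_dependents(owner_uid, pods):
    """On an orphaning delete, strip only the ownerReferences that point
    at the deleted owner; the pods themselves are left untouched."""
    for pod in pods:
        pod["ownerReferences"] = [
            ref for ref in pod.get("ownerReferences", [])
            if ref["uid"] != owner_uid
        ]

pods = [{"name": "pod-a", "ownerReferences": [{"uid": "rc-123"}]}]
orphan_dependents("rc-123", pods)
print(pods[0]["ownerReferences"])  # [] -- pod survives, reference removed
```

This is why the test waits 30 seconds after the delete: it is checking that the collector does *not* mistakenly cascade the deletion to the orphaned pods.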
• [SLOW TEST:40.848 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":27,"skipped":416,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:09:47.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 7 00:09:47.140: INFO: Waiting up to 5m0s for pod "pod-89da1a31-4a29-4c86-9e71-7535d97ee9aa" in namespace "emptydir-6321" to be "Succeeded or Failed" May 7 00:09:47.202: INFO: Pod "pod-89da1a31-4a29-4c86-9e71-7535d97ee9aa": Phase="Pending", Reason="", readiness=false. Elapsed: 62.444908ms May 7 00:09:49.205: INFO: Pod "pod-89da1a31-4a29-4c86-9e71-7535d97ee9aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065585292s May 7 00:09:51.208: INFO: Pod "pod-89da1a31-4a29-4c86-9e71-7535d97ee9aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.068501874s STEP: Saw pod success May 7 00:09:51.208: INFO: Pod "pod-89da1a31-4a29-4c86-9e71-7535d97ee9aa" satisfied condition "Succeeded or Failed" May 7 00:09:51.210: INFO: Trying to get logs from node latest-worker2 pod pod-89da1a31-4a29-4c86-9e71-7535d97ee9aa container test-container: STEP: delete the pod May 7 00:09:51.413: INFO: Waiting for pod pod-89da1a31-4a29-4c86-9e71-7535d97ee9aa to disappear May 7 00:09:51.463: INFO: Pod pod-89da1a31-4a29-4c86-9e71-7535d97ee9aa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:09:51.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6321" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":28,"skipped":417,"failed":0} ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:09:51.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 7 00:09:51.645: INFO: Waiting up to 5m0s for pod "downward-api-ac3a61ce-bc47-4872-837b-6ac2d0024a1f" in namespace "downward-api-7346" to be "Succeeded or Failed" May 7 00:09:51.706: INFO: Pod 
"downward-api-ac3a61ce-bc47-4872-837b-6ac2d0024a1f": Phase="Pending", Reason="", readiness=false. Elapsed: 60.72886ms May 7 00:09:53.766: INFO: Pod "downward-api-ac3a61ce-bc47-4872-837b-6ac2d0024a1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120721303s May 7 00:09:56.018: INFO: Pod "downward-api-ac3a61ce-bc47-4872-837b-6ac2d0024a1f": Phase="Running", Reason="", readiness=true. Elapsed: 4.373387232s May 7 00:09:58.391: INFO: Pod "downward-api-ac3a61ce-bc47-4872-837b-6ac2d0024a1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.74622179s STEP: Saw pod success May 7 00:09:58.391: INFO: Pod "downward-api-ac3a61ce-bc47-4872-837b-6ac2d0024a1f" satisfied condition "Succeeded or Failed" May 7 00:09:58.394: INFO: Trying to get logs from node latest-worker2 pod downward-api-ac3a61ce-bc47-4872-837b-6ac2d0024a1f container dapi-container: STEP: delete the pod May 7 00:09:59.202: INFO: Waiting for pod downward-api-ac3a61ce-bc47-4872-837b-6ac2d0024a1f to disappear May 7 00:09:59.532: INFO: Pod downward-api-ac3a61ce-bc47-4872-837b-6ac2d0024a1f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:09:59.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7346" for this suite. 
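The Downward API test above injects the node's IP into the container through an environment variable. A minimal pod manifest using the same mechanism (the pod and container names are illustrative placeholders; `status.hostIP` is the documented Downward API field the test exercises):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
```

The test's assertion is essentially that the container's log contains the IP of the node the pod was scheduled onto.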
• [SLOW TEST:8.033 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":29,"skipped":417,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:09:59.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:10:00.586: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 7 00:10:02.843: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6765 create -f -' May 7 00:10:10.839: INFO: stderr: "" May 7 00:10:10.839: INFO: stdout: "e2e-test-crd-publish-openapi-9338-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 7 00:10:10.839: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6765 delete e2e-test-crd-publish-openapi-9338-crds test-cr' May 7 00:10:10.959: INFO: stderr: "" May 7 00:10:10.959: INFO: stdout: "e2e-test-crd-publish-openapi-9338-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 7 00:10:10.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6765 apply -f -' May 7 00:10:11.786: INFO: stderr: "" May 7 00:10:11.786: INFO: stdout: "e2e-test-crd-publish-openapi-9338-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 7 00:10:11.786: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6765 delete e2e-test-crd-publish-openapi-9338-crds test-cr' May 7 00:10:11.912: INFO: stderr: "" May 7 00:10:11.912: INFO: stdout: "e2e-test-crd-publish-openapi-9338-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 7 00:10:11.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9338-crds' May 7 00:10:12.260: INFO: stderr: "" May 7 00:10:12.260: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9338-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:10:15.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6765" for this suite. • [SLOW TEST:15.644 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":30,"skipped":443,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:10:15.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 7 00:10:15.382: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc328d5d-f667-4141-bd45-42ad44d90a66" in namespace "downward-api-1762" to be "Succeeded or Failed" May 7 00:10:15.463: INFO: Pod "downwardapi-volume-dc328d5d-f667-4141-bd45-42ad44d90a66": Phase="Pending", Reason="", readiness=false. Elapsed: 81.363017ms May 7 00:10:17.467: INFO: Pod "downwardapi-volume-dc328d5d-f667-4141-bd45-42ad44d90a66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084997323s May 7 00:10:19.470: INFO: Pod "downwardapi-volume-dc328d5d-f667-4141-bd45-42ad44d90a66": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.088130639s STEP: Saw pod success May 7 00:10:19.470: INFO: Pod "downwardapi-volume-dc328d5d-f667-4141-bd45-42ad44d90a66" satisfied condition "Succeeded or Failed" May 7 00:10:19.472: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-dc328d5d-f667-4141-bd45-42ad44d90a66 container client-container: STEP: delete the pod May 7 00:10:19.575: INFO: Waiting for pod downwardapi-volume-dc328d5d-f667-4141-bd45-42ad44d90a66 to disappear May 7 00:10:19.587: INFO: Pod downwardapi-volume-dc328d5d-f667-4141-bd45-42ad44d90a66 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:10:19.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1762" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":31,"skipped":497,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:10:19.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 7 00:10:20.304: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 7 00:10:22.312: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407020, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407020, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407020, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407020, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 00:10:25.360: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be 
possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:10:25.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1127" for this suite.
STEP: Destroying namespace "webhook-1127-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.116 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":32,"skipped":522,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:10:25.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 7 00:10:26.997: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 7 00:10:29.006: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407026, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407026, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407027, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407026, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 7 00:10:31.015: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407026, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407026, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407027, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407026, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 7 00:10:34.044: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:10:34.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7161" for this suite.
STEP: Destroying namespace "webhook-7161-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.839 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":33,"skipped":526,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:10:34.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 7 00:10:35.248: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 7 00:10:35.400: INFO: Number of nodes with available pods: 0
May 7 00:10:35.400: INFO: Node latest-worker is running more than one daemon pod
May 7 00:10:36.491: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 7 00:10:36.823: INFO: Number of nodes with available pods: 0
May 7 00:10:36.824: INFO: Node latest-worker is running more than one daemon pod
May 7 00:10:37.443: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 7 00:10:37.873: INFO: Number of nodes with available pods: 0
May 7 00:10:37.873: INFO: Node latest-worker is running more than one daemon pod
May 7 00:10:38.406: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 7 00:10:38.409: INFO: Number of nodes with available pods: 0
May 7 00:10:38.410: INFO: Node latest-worker is running more than one daemon pod
May 7 00:10:39.924: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 7 00:10:40.533: INFO: Number of nodes with available pods: 0
May 7 00:10:40.533: INFO: Node latest-worker is running more than one daemon pod
May 7 00:10:41.845: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 7 00:10:42.107: INFO: Number of nodes with available pods: 0
May 7 00:10:42.107: INFO: Node latest-worker is running more than one daemon pod
May 7 00:10:42.923: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 7 00:10:43.365: INFO: Number of nodes with available pods: 1
May 7 00:10:43.365: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:10:43.461: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 7 00:10:43.922: INFO: Number of nodes with available pods: 1
May 7 00:10:43.922: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:10:44.404: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 7 00:10:44.407: INFO: Number of nodes with available pods: 1
May 7 00:10:44.407: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:10:45.719: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 7 00:10:45.723: INFO: Number of nodes with available pods: 1
May 7 00:10:45.723: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:10:46.646: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 7 00:10:46.987: INFO: Number of nodes with available pods: 1
May 7 00:10:46.987: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:10:47.610: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 7 00:10:47.676: INFO: Number of nodes with available pods: 1
May 7 00:10:47.676: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:10:48.485: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 7 00:10:48.605: INFO: Number of nodes with available pods: 1
May 7 00:10:48.605: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:10:49.437: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 7 00:10:49.440: INFO: Number of nodes with available pods: 2
May 7 00:10:49.440: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 7 00:10:49.497: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 7 00:10:49.599: INFO: Number of nodes with available pods: 2
May 7 00:10:49.599: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6744, will wait for the garbage collector to delete the pods
May 7 00:10:50.892: INFO: Deleting DaemonSet.extensions daemon-set took: 6.129543ms
May 7 00:10:51.193: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.233622ms
May 7 00:11:05.568: INFO: Number of nodes with available pods: 0
May 7 00:11:05.568: INFO: Number of running nodes: 0, number of available pods: 0
May 7 00:11:05.570: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6744/daemonsets","resourceVersion":"2163172"},"items":null}
May 7 00:11:05.573: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6744/pods","resourceVersion":"2163172"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:11:05.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6744" for this suite.
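The repeated "can't tolerate node latest-control-plane" lines in this DaemonSet test come from taint/toleration matching: the control-plane node carries a node-role.kubernetes.io/master:NoSchedule taint and the test DaemonSet's pods have no toleration for it, so that node is skipped when counting available pods. A minimal sketch of the documented matching rule follows; it is an illustration, not the actual scheduler code (the real logic in k8s.io/component-helpers also handles tolerationSeconds and further edge cases):

```python
# Simplified taint/toleration matching, mirroring the rule the DaemonSet
# log lines above exercise. Illustrative sketch only.

def tolerates(taint, toleration):
    """True if a single toleration matches a single taint."""
    # An empty toleration effect matches any taint effect.
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    op = toleration.get("operator", "Equal")
    if op == "Exists":
        # Exists with an empty key tolerates every taint.
        return not toleration.get("key") or toleration["key"] == taint["key"]
    return (toleration.get("key") == taint["key"]
            and toleration.get("value", "") == taint.get("value", ""))

def pod_tolerates_node(taints, tolerations):
    """Schedulable only if every taint is matched by some toleration."""
    return all(any(tolerates(t, tol) for tol in tolerations) for t in taints)

master_taint = {"key": "node-role.kubernetes.io/master", "value": "",
                "effect": "NoSchedule"}
# The e2e DaemonSet pods carry no matching toleration, hence the
# "can't tolerate node latest-control-plane" messages:
print(pod_tolerates_node([master_taint], []))  # False
print(pod_tolerates_node([master_taint],
                         [{"key": "node-role.kubernetes.io/master",
                           "operator": "Exists",
                           "effect": "NoSchedule"}]))  # True
```

This is also why the log later shows exactly two running nodes: only latest-worker and latest-worker2 are eligible.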
• [SLOW TEST:31.036 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":34,"skipped":528,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:11:05.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 7 00:11:06.467: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 7 00:11:08.476: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407066, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407066, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407066, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407066, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 7 00:11:10.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407066, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407066, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407066, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407066, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 7 00:11:14.036: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 7 00:11:14.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6260-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:11:15.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4967" for this suite.
STEP: Destroying namespace "webhook-4967-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:10.098 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":35,"skipped":533,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:11:15.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It]
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
May 7 00:11:16.034: INFO: Waiting up to 5m0s for pod "client-containers-ef1b3071-c68a-4e8b-b443-b29c47b49ad3" in namespace "containers-6410" to be "Succeeded or Failed"
May 7 00:11:16.071: INFO: Pod "client-containers-ef1b3071-c68a-4e8b-b443-b29c47b49ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 36.86861ms
May 7 00:11:18.089: INFO: Pod "client-containers-ef1b3071-c68a-4e8b-b443-b29c47b49ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055501727s
May 7 00:11:20.094: INFO: Pod "client-containers-ef1b3071-c68a-4e8b-b443-b29c47b49ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059862591s
May 7 00:11:22.101: INFO: Pod "client-containers-ef1b3071-c68a-4e8b-b443-b29c47b49ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067004593s
May 7 00:11:24.125: INFO: Pod "client-containers-ef1b3071-c68a-4e8b-b443-b29c47b49ad3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091406613s
STEP: Saw pod success
May 7 00:11:24.125: INFO: Pod "client-containers-ef1b3071-c68a-4e8b-b443-b29c47b49ad3" satisfied condition "Succeeded or Failed"
May 7 00:11:24.128: INFO: Trying to get logs from node latest-worker2 pod client-containers-ef1b3071-c68a-4e8b-b443-b29c47b49ad3 container test-container:
STEP: delete the pod
May 7 00:11:24.175: INFO: Waiting for pod client-containers-ef1b3071-c68a-4e8b-b443-b29c47b49ad3 to disappear
May 7 00:11:24.214: INFO: Pod client-containers-ef1b3071-c68a-4e8b-b443-b29c47b49ad3 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:11:24.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6410" for this suite.
• [SLOW TEST:8.536 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":36,"skipped":586,"failed":0}
SSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:11:24.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
May 7 00:11:24.444: INFO: Waiting up to 5m0s for pod "client-containers-73c62426-cc8c-4be0-b239-b0726579b047" in namespace "containers-6713" to be "Succeeded or Failed"
May 7 00:11:24.455: INFO: Pod "client-containers-73c62426-cc8c-4be0-b239-b0726579b047": Phase="Pending", Reason="", readiness=false. Elapsed: 11.011232ms
May 7 00:11:26.466: INFO: Pod "client-containers-73c62426-cc8c-4be0-b239-b0726579b047": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022727258s
May 7 00:11:28.471: INFO: Pod "client-containers-73c62426-cc8c-4be0-b239-b0726579b047": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027248421s
May 7 00:11:30.475: INFO: Pod "client-containers-73c62426-cc8c-4be0-b239-b0726579b047": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030983378s
STEP: Saw pod success
May 7 00:11:30.475: INFO: Pod "client-containers-73c62426-cc8c-4be0-b239-b0726579b047" satisfied condition "Succeeded or Failed"
May 7 00:11:30.477: INFO: Trying to get logs from node latest-worker2 pod client-containers-73c62426-cc8c-4be0-b239-b0726579b047 container test-container:
STEP: delete the pod
May 7 00:11:30.497: INFO: Waiting for pod client-containers-73c62426-cc8c-4be0-b239-b0726579b047 to disappear
May 7 00:11:30.501: INFO: Pod client-containers-73c62426-cc8c-4be0-b239-b0726579b047 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:11:30.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6713" for this suite.
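The two Docker Containers tests exercise the documented interaction between a pod spec's `command`/`args` and the image's ENTRYPOINT/CMD: `args` alone replaces only the image CMD, while `command` replaces the ENTRYPOINT and causes the image CMD to be ignored entirely. A small sketch of that rule (an illustration of the documented semantics, not kubelet code):

```python
# How a pod spec's command/args combine with an image's ENTRYPOINT/CMD
# to form the container argv, per the Kubernetes documentation.
# Simplified illustration only.

def effective_argv(entrypoint, cmd, command=None, args=None):
    if command is None and args is None:
        return entrypoint + cmd   # image defaults used as-is
    if command is None:
        return entrypoint + args  # args replaces CMD only
    if args is None:
        return command            # command replaces ENTRYPOINT; CMD is ignored
    return command + args

entrypoint, cmd = ["/ep"], ["default-arg"]
print(effective_argv(entrypoint, cmd))                     # ['/ep', 'default-arg']
print(effective_argv(entrypoint, cmd, args=["override"]))  # ['/ep', 'override']
print(effective_argv(entrypoint, cmd, command=["/other"])) # ['/other']
```

The "(docker cmd)" test above corresponds to the `args`-only case, and the "(docker entrypoint)" test to the `command` case.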
• [SLOW TEST:6.284 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":37,"skipped":589,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Secrets should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:11:30.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:11:30.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3374" for this suite.
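The "patching the secret" step applies a partial update rather than replacing the object. For adding a label, the semantics are those of an RFC 7386 JSON merge patch (what `kubectl patch --type=merge` sends; strategic merge behaves the same for simple map fields like labels). An illustrative reimplementation, not client-go code; the secret contents below are hypothetical stand-ins:

```python
# RFC 7386 JSON Merge Patch semantics: dicts merge recursively,
# null deletes a key, everything else replaces. Illustrative sketch.

def merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for k, v in patch.items():
        if v is None:
            result.pop(k, None)  # null deletes the key
        else:
            result[k] = merge_patch(result.get(k, {}), v)
    return result

# Hypothetical secret resembling what the e2e test creates:
secret = {"metadata": {"name": "s", "labels": {"testsecret": "true"}},
          "data": {"key": "dmFsdWU="}}
patched = merge_patch(
    secret, {"metadata": {"labels": {"testsecret-constant": "true"}}})
print(patched["metadata"]["labels"])
# {'testsecret': 'true', 'testsecret-constant': 'true'}
```

The existing label survives the patch, which is why the test can afterwards find the secret by a label selector covering both the original and the patched label.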
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":38,"skipped":594,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:11:30.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:11:30.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-7363" for this suite.
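The Table-transformation test drives API-server content negotiation: a client asks for a server-side Table rendering of a resource via the Accept header, and a backend that cannot produce one answers 406 Not Acceptable. A rough sketch of that negotiation; the Accept media type matches what recent kubectl sends, and the `negotiate` helper is a deliberate simplification (no q-values or wildcards), not apiserver code:

```python
# Server-side Table rendering is requested through the Accept header;
# a backend without meta.k8s.io Table support has no matching media
# type to offer, so the request fails with HTTP 406. Sketch only.
TABLE_ACCEPT = "application/json;as=Table;v=v1;g=meta.k8s.io"

def negotiate(accept, supported):
    """Return the first acceptable media type, or None (-> HTTP 406)."""
    for offered in accept.split(","):
        if offered.strip() in supported:
            return offered.strip()
    return None

# Backend that only speaks plain JSON cannot satisfy a Table request:
print(negotiate(TABLE_ACCEPT, {"application/json"}))        # None -> 406
print(negotiate("application/json", {"application/json"}))  # 'application/json'
```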
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":39,"skipped":620,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:11:30.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-7115/configmap-test-534b7fd5-109b-44d9-9c52-43c665594775
STEP: Creating a pod to test consume configMaps
May 7 00:11:31.031: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c7fb286-00a7-4f43-afd5-ee6c7aeb4d43" in namespace "configmap-7115" to be "Succeeded or Failed"
May 7 00:11:31.047: INFO: Pod "pod-configmaps-6c7fb286-00a7-4f43-afd5-ee6c7aeb4d43": Phase="Pending", Reason="", readiness=false. Elapsed: 15.703313ms
May 7 00:11:33.137: INFO: Pod "pod-configmaps-6c7fb286-00a7-4f43-afd5-ee6c7aeb4d43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106331654s
May 7 00:11:35.141: INFO: Pod "pod-configmaps-6c7fb286-00a7-4f43-afd5-ee6c7aeb4d43": Phase="Running", Reason="", readiness=true. Elapsed: 4.110468692s
May 7 00:11:37.146: INFO: Pod "pod-configmaps-6c7fb286-00a7-4f43-afd5-ee6c7aeb4d43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.114670437s
STEP: Saw pod success
May 7 00:11:37.146: INFO: Pod "pod-configmaps-6c7fb286-00a7-4f43-afd5-ee6c7aeb4d43" satisfied condition "Succeeded or Failed"
May 7 00:11:37.149: INFO: Trying to get logs from node latest-worker pod pod-configmaps-6c7fb286-00a7-4f43-afd5-ee6c7aeb4d43 container env-test:
STEP: delete the pod
May 7 00:11:37.190: INFO: Waiting for pod pod-configmaps-6c7fb286-00a7-4f43-afd5-ee6c7aeb4d43 to disappear
May 7 00:11:37.196: INFO: Pod pod-configmaps-6c7fb286-00a7-4f43-afd5-ee6c7aeb4d43 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:11:37.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7115" for this suite.
• [SLOW TEST:6.411 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":40,"skipped":644,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:11:37.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 7 00:11:37.307: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version'
May 7 00:11:37.444: INFO: stderr: ""
May 7 00:11:37.444: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:11:37.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7969" for this suite.
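The kubectl version check asserts that both Client and Server `version.Info` blocks are printed in full. The GitVersion strings in that output, such as `v1.19.0-alpha.3.35+3416442e4b7eeb`, follow semantic versioning with a leading `v`, an optional pre-release segment, and optional build metadata. A small parser for them (an illustrative sketch, not the k8s.io/apimachinery version package):

```python
import re

# Split a Kubernetes GitVersion string into its semver components.
_GITVERSION = re.compile(
    r"^v(\d+)\.(\d+)\.(\d+)"         # vMAJOR.MINOR.PATCH
    r"(?:-([0-9A-Za-z.-]+))?"        # optional pre-release (e.g. alpha.3.35)
    r"(?:\+([0-9A-Za-z.-]+))?$")     # optional build metadata (commit)

def parse_git_version(s):
    m = _GITVERSION.match(s)
    if not m:
        raise ValueError(f"not a GitVersion: {s!r}")
    major, minor, patch, pre, build = m.groups()
    return {"major": int(major), "minor": int(minor), "patch": int(patch),
            "prerelease": pre, "build": build}

# The two GitVersions from the kubectl output above:
print(parse_git_version("v1.19.0-alpha.3.35+3416442e4b7eeb"))
print(parse_git_version("v1.18.2"))
```

Parsed this way, the log shows a v1.19 pre-release client talking to a v1.18.2 server, which is within the supported client/server skew of one minor version.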
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":41,"skipped":657,"failed":0} S ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:11:37.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:11:37.561: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 7 00:11:42.641: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 7 00:11:42.641: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 7 00:11:47.107: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-8972 /apis/apps/v1/namespaces/deployment-8972/deployments/test-cleanup-deployment 2ce9dafb-aac5-4ea9-99fc-a493f38a3599 2163536 1 2020-05-07 00:11:42 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2020-05-07 00:11:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-07 00:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004830db8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-07 00:11:43 +0000 UTC,LastTransitionTime:2020-05-07 00:11:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-6688745694" has successfully progressed.,LastUpdateTime:2020-05-07 00:11:46 +0000 UTC,LastTransitionTime:2020-05-07 00:11:43 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 7 00:11:47.112: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-8972 /apis/apps/v1/namespaces/deployment-8972/replicasets/test-cleanup-deployment-6688745694 8195432b-d376-4580-9810-a0f739dac226 2163525 1 2020-05-07 00:11:42 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 2ce9dafb-aac5-4ea9-99fc-a493f38a3599 0xc004831487 0xc004831488}] [] [{kube-controller-manager Update apps/v1 2020-05-07 00:11:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ce9dafb-aac5-4ea9-99fc-a493f38a3599\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004831518 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 7 00:11:47.114: INFO: Pod "test-cleanup-deployment-6688745694-jpvsw" is available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-jpvsw test-cleanup-deployment-6688745694- deployment-8972 /api/v1/namespaces/deployment-8972/pods/test-cleanup-deployment-6688745694-jpvsw d01717d0-218b-4fc0-a7c8-95dd2061e136 2163524 0 2020-05-07 00:11:43 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 8195432b-d376-4580-9810-a0f739dac226 0xc004831907 0xc004831908}] [] [{kube-controller-manager Update v1 2020-05-07 00:11:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8195432b-d376-4580-9810-a0f739dac226\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:11:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.120\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-trgwl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-trgwl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-trgwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeD
evices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:11:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:11:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:11:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:11:43 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.120,StartTime:2020-05-07 00:11:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-07 00:11:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://acac7b12f97c31a037fdde5c0e671c70e77c85870a9c1f92b1c10d6815a96a78,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.120,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:11:47.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8972" for this suite. 
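The Deployment dump above shows `RevisionHistoryLimit:*0`, so once the rollout completes the controller deletes every old, fully scaled-down ReplicaSet; that is what "Waiting for deployment test-cleanup-deployment history to be cleaned up" observes. A minimal sketch of that cleanup policy, under the assumption that `rsInfo` and `oldReplicaSetsToDelete` are illustrative stand-ins, not the real controller API:

```go
package main

import (
	"fmt"
	"sort"
)

// rsInfo is an illustrative stand-in for a ReplicaSet: its name, the
// deployment.kubernetes.io/revision annotation, and live replica count.
type rsInfo struct {
	Name     string
	Revision int
	Replicas int
}

// oldReplicaSetsToDelete mimics the controller's history cleanup: among
// *old* ReplicaSets that have been scaled to zero, keep only the
// historyLimit most recent revisions and return the rest for deletion.
func oldReplicaSetsToDelete(old []rsInfo, historyLimit int) []string {
	// Only fully scaled-down ReplicaSets are eligible for cleanup.
	var eligible []rsInfo
	for _, rs := range old {
		if rs.Replicas == 0 {
			eligible = append(eligible, rs)
		}
	}
	sort.Slice(eligible, func(i, j int) bool {
		return eligible[i].Revision < eligible[j].Revision
	})
	if len(eligible) <= historyLimit {
		return nil
	}
	victims := eligible[:len(eligible)-historyLimit]
	names := make([]string, 0, len(victims))
	for _, rs := range victims {
		names = append(names, rs.Name)
	}
	return names
}

func main() {
	// With revisionHistoryLimit 0 (as in the test), the one old,
	// scaled-down ReplicaSet is deleted outright.
	old := []rsInfo{{Name: "cleanup-pod-rs", Revision: 1, Replicas: 0}}
	fmt.Println(oldReplicaSetsToDelete(old, 0)) // [cleanup-pod-rs]
}
```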
• [SLOW TEST:9.669 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":42,"skipped":658,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:11:47.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 7 00:11:47.463: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f164dc74-53ce-42a4-8ac5-3ca930da5610" in namespace "projected-4184" to be "Succeeded or Failed" May 7 00:11:47.472: INFO: Pod "downwardapi-volume-f164dc74-53ce-42a4-8ac5-3ca930da5610": Phase="Pending", Reason="", readiness=false. Elapsed: 9.001786ms May 7 00:11:49.611: INFO: Pod "downwardapi-volume-f164dc74-53ce-42a4-8ac5-3ca930da5610": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.147576214s May 7 00:11:51.615: INFO: Pod "downwardapi-volume-f164dc74-53ce-42a4-8ac5-3ca930da5610": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152128131s May 7 00:11:53.640: INFO: Pod "downwardapi-volume-f164dc74-53ce-42a4-8ac5-3ca930da5610": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.177357812s STEP: Saw pod success May 7 00:11:53.641: INFO: Pod "downwardapi-volume-f164dc74-53ce-42a4-8ac5-3ca930da5610" satisfied condition "Succeeded or Failed" May 7 00:11:53.644: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f164dc74-53ce-42a4-8ac5-3ca930da5610 container client-container: STEP: delete the pod May 7 00:11:53.809: INFO: Waiting for pod downwardapi-volume-f164dc74-53ce-42a4-8ac5-3ca930da5610 to disappear May 7 00:11:53.826: INFO: Pod downwardapi-volume-f164dc74-53ce-42a4-8ac5-3ca930da5610 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:11:53.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4184" for this suite. 
• [SLOW TEST:6.722 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":43,"skipped":664,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:11:53.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 7 00:11:53.962: INFO: Waiting up to 5m0s for pod "pod-2e895336-3538-4106-99e0-eb5dcfe42b5b" in namespace "emptydir-8511" to be "Succeeded or Failed" May 7 00:11:54.000: INFO: Pod "pod-2e895336-3538-4106-99e0-eb5dcfe42b5b": Phase="Pending", Reason="", readiness=false. Elapsed: 38.039576ms May 7 00:11:56.003: INFO: Pod "pod-2e895336-3538-4106-99e0-eb5dcfe42b5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041455397s May 7 00:11:58.144: INFO: Pod "pod-2e895336-3538-4106-99e0-eb5dcfe42b5b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.182197039s May 7 00:12:00.185: INFO: Pod "pod-2e895336-3538-4106-99e0-eb5dcfe42b5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223844566s May 7 00:12:02.390: INFO: Pod "pod-2e895336-3538-4106-99e0-eb5dcfe42b5b": Phase="Running", Reason="", readiness=true. Elapsed: 8.42790866s May 7 00:12:04.522: INFO: Pod "pod-2e895336-3538-4106-99e0-eb5dcfe42b5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.559987693s STEP: Saw pod success May 7 00:12:04.522: INFO: Pod "pod-2e895336-3538-4106-99e0-eb5dcfe42b5b" satisfied condition "Succeeded or Failed" May 7 00:12:04.898: INFO: Trying to get logs from node latest-worker pod pod-2e895336-3538-4106-99e0-eb5dcfe42b5b container test-container: STEP: delete the pod May 7 00:12:05.602: INFO: Waiting for pod pod-2e895336-3538-4106-99e0-eb5dcfe42b5b to disappear May 7 00:12:06.321: INFO: Pod pod-2e895336-3538-4106-99e0-eb5dcfe42b5b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:12:06.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8511" for this suite. 
• [SLOW TEST:12.483 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":44,"skipped":670,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:12:06.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:12:07.153: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 7 00:12:08.841: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] 
[sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:12:09.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1552" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":45,"skipped":683,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:12:09.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 7 00:12:14.326: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 7 00:12:18.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407132, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407132, loc:(*time.Location)(0x7c342a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-75dd644756\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407134, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407134, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} May 7 00:12:21.155: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407134, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407134, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407138, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407132, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:12:23.373: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407134, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724407134, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407138, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407132, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:12:25.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407134, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407134, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407138, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407132, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 00:12:29.559: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the 
webhook May 7 00:12:38.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-4993 to-be-attached-pod -i -c=container1' May 7 00:12:38.146: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:12:38.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4993" for this suite. STEP: Destroying namespace "webhook-4993-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:29.850 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":46,"skipped":701,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:12:39.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default 
service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:12:40.730: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 7 00:12:43.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-856 create -f -' May 7 00:12:50.212: INFO: stderr: "" May 7 00:12:50.212: INFO: stdout: "e2e-test-crd-publish-openapi-2560-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 7 00:12:50.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-856 delete e2e-test-crd-publish-openapi-2560-crds test-cr' May 7 00:12:50.374: INFO: stderr: "" May 7 00:12:50.374: INFO: stdout: "e2e-test-crd-publish-openapi-2560-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 7 00:12:50.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-856 apply -f -' May 7 00:12:50.722: INFO: stderr: "" May 7 00:12:50.722: INFO: stdout: "e2e-test-crd-publish-openapi-2560-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 7 00:12:50.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-856 delete e2e-test-crd-publish-openapi-2560-crds test-cr' May 7 00:12:50.859: INFO: stderr: "" May 7 00:12:50.859: INFO: stdout: "e2e-test-crd-publish-openapi-2560-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 7 00:12:50.859: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2560-crds' May 7 00:12:51.140: INFO: stderr: "" May 7 00:12:51.140: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2560-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:12:54.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-856" for this suite. • [SLOW TEST:14.287 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":47,"skipped":702,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:12:54.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 7 00:12:58.670: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6dc2fc3f-67f3-4438-a4f9-336ce1bc4a7f" May 7 00:12:58.671: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6dc2fc3f-67f3-4438-a4f9-336ce1bc4a7f" in namespace "pods-3829" to be "terminated due to deadline exceeded" May 7 00:12:58.678: INFO: Pod "pod-update-activedeadlineseconds-6dc2fc3f-67f3-4438-a4f9-336ce1bc4a7f": Phase="Running", Reason="", readiness=true. Elapsed: 7.691535ms May 7 00:13:00.683: INFO: Pod "pod-update-activedeadlineseconds-6dc2fc3f-67f3-4438-a4f9-336ce1bc4a7f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.012121178s May 7 00:13:00.683: INFO: Pod "pod-update-activedeadlineseconds-6dc2fc3f-67f3-4438-a4f9-336ce1bc4a7f" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:13:00.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3829" for this suite. 
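For reference, the pod pattern this spec exercises can be sketched as a manifest; the name and image below are illustrative assumptions, not values from this run. The test creates such a pod, then updates spec.activeDeadlineSeconds to a short value and waits for the kubelet to fail the pod with reason DeadlineExceeded:

```yaml
# Illustrative sketch only: pod name and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2   # any long-running container works here
  # The test then patches the live pod with a short deadline, e.g.:
  #   spec:
  #     activeDeadlineSeconds: 1
  # activeDeadlineSeconds is one of the few mutable pod spec fields; once
  # exceeded, the pod transitions to Phase=Failed, Reason=DeadlineExceeded,
  # matching the log entries above.
```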
• [SLOW TEST:6.606 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":48,"skipped":733,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:13:00.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 7 00:13:00.813: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0f5d3ec-7347-4044-8ed2-2599484a2576" in namespace "projected-2833" to be "Succeeded or Failed" May 7 00:13:00.816: INFO: Pod "downwardapi-volume-a0f5d3ec-7347-4044-8ed2-2599484a2576": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.599308ms May 7 00:13:02.835: INFO: Pod "downwardapi-volume-a0f5d3ec-7347-4044-8ed2-2599484a2576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021660332s May 7 00:13:04.844: INFO: Pod "downwardapi-volume-a0f5d3ec-7347-4044-8ed2-2599484a2576": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030510633s STEP: Saw pod success May 7 00:13:04.844: INFO: Pod "downwardapi-volume-a0f5d3ec-7347-4044-8ed2-2599484a2576" satisfied condition "Succeeded or Failed" May 7 00:13:04.846: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a0f5d3ec-7347-4044-8ed2-2599484a2576 container client-container: STEP: delete the pod May 7 00:13:04.911: INFO: Waiting for pod downwardapi-volume-a0f5d3ec-7347-4044-8ed2-2599484a2576 to disappear May 7 00:13:04.918: INFO: Pod downwardapi-volume-a0f5d3ec-7347-4044-8ed2-2599484a2576 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:13:04.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2833" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":49,"skipped":761,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:13:04.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-3d565780-9a98-47a8-9398-b048a9693ea6 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:13:11.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-444" for this suite. 
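The ConfigMap binary-data spec above relies on the `binaryData` field, which holds base64-encoded bytes alongside plain `data` keys; both are projected as files into a volume. A minimal sketch of the objects involved (names and payload are illustrative, not the generated ones from this run):

```yaml
# Illustrative sketch only: names and payload are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd
data:
  data-1: value-1          # text key, mounted as a plain file
binaryData:
  binary-file: aGVsbG8=    # base64-encoded bytes, mounted verbatim
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-volume
spec:
  containers:
  - name: reader
    image: k8s.gcr.io/pause:3.2
    volumeMounts:
    - name: cfg
      mountPath: /etc/configmap-volume
  volumes:
  - name: cfg
    configMap:
      name: configmap-test-upd
```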
• [SLOW TEST:6.244 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":50,"skipped":774,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:13:11.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:13:43.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7385" for this suite. 
STEP: Destroying namespace "nsdeletetest-7905" for this suite. May 7 00:13:43.379: INFO: Namespace nsdeletetest-7905 was already deleted STEP: Destroying namespace "nsdeletetest-4002" for this suite. • [SLOW TEST:32.213 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":51,"skipped":779,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:13:43.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 7 00:13:44.140: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 7 00:13:46.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407224, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407224, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407224, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407224, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 00:13:49.222: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 
00:14:01.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4669" for this suite. STEP: Destroying namespace "webhook-4669-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.131 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":52,"skipped":789,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:14:01.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod May 7 00:14:05.746: INFO: Pod pod-hostip-f1197aa0-52f4-410f-bce3-f8d86b548a3b has hostIP: 172.17.0.12 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 
00:14:05.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3230" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":53,"skipped":815,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:14:05.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 7 00:14:06.436: INFO: Waiting up to 5m0s for pod "client-containers-bc385fcc-0bc8-44e1-be5f-200bf1f05ae4" in namespace "containers-6581" to be "Succeeded or Failed" May 7 00:14:06.516: INFO: Pod "client-containers-bc385fcc-0bc8-44e1-be5f-200bf1f05ae4": Phase="Pending", Reason="", readiness=false. Elapsed: 79.342099ms May 7 00:14:08.520: INFO: Pod "client-containers-bc385fcc-0bc8-44e1-be5f-200bf1f05ae4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083152686s May 7 00:14:10.524: INFO: Pod "client-containers-bc385fcc-0bc8-44e1-be5f-200bf1f05ae4": Phase="Running", Reason="", readiness=true. Elapsed: 4.087481233s May 7 00:14:12.528: INFO: Pod "client-containers-bc385fcc-0bc8-44e1-be5f-200bf1f05ae4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.091702633s STEP: Saw pod success May 7 00:14:12.528: INFO: Pod "client-containers-bc385fcc-0bc8-44e1-be5f-200bf1f05ae4" satisfied condition "Succeeded or Failed" May 7 00:14:12.531: INFO: Trying to get logs from node latest-worker pod client-containers-bc385fcc-0bc8-44e1-be5f-200bf1f05ae4 container test-container: STEP: delete the pod May 7 00:14:12.585: INFO: Waiting for pod client-containers-bc385fcc-0bc8-44e1-be5f-200bf1f05ae4 to disappear May 7 00:14:12.591: INFO: Pod client-containers-bc385fcc-0bc8-44e1-be5f-200bf1f05ae4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:14:12.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6581" for this suite. • [SLOW TEST:6.636 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":54,"skipped":831,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:14:12.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:14:29.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6803" for this suite. • [SLOW TEST:16.486 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":288,"completed":55,"skipped":839,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:14:29.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:14:36.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7683" for this suite. • [SLOW TEST:7.125 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":288,"completed":56,"skipped":840,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:14:36.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 7 00:14:36.978: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 7 00:14:38.990: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407276, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407276, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407277, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407276, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 00:14:42.047: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:14:42.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1448" for this suite. STEP: Destroying namespace "webhook-1448-markers" for this suite. 
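The "Registering the mutating configmap webhook via the AdmissionRegistration API" step above corresponds to creating a MutatingWebhookConfiguration pointing at the deployed webhook service. A hedged sketch follows; the service namespace and name are taken from this run's log, while the configuration name, webhook path, and CA bundle are placeholders:

```yaml
# Illustrative sketch only: metadata.name, the path, and caBundle are
# placeholders; the service coordinates match the log above.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-configmap-webhook
webhooks:
- name: mutate-configmap.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-1448
      name: e2e-test-webhook
      path: /mutating-configmaps   # assumed path, not shown in the log
    caBundle: "<base64-encoded CA bundle>"
  sideEffects: None
  admissionReviewVersions: ["v1"]
```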
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.248 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":57,"skipped":883,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:14:43.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-6422 STEP: creating a selector STEP: Creating the service pods in kubernetes May 7 00:14:43.747: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 7 00:14:44.138: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 7 00:14:46.271: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 7 
00:14:48.238: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 7 00:14:50.142: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 00:14:52.141: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 00:14:54.142: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 00:14:56.142: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 00:14:58.142: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 00:15:00.142: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 00:15:02.157: INFO: The status of Pod netserver-0 is Running (Ready = true) May 7 00:15:02.163: INFO: The status of Pod netserver-1 is Running (Ready = false) May 7 00:15:04.167: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 7 00:15:10.218: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.129:8080/dial?request=hostname&protocol=http&host=10.244.1.36&port=8080&tries=1'] Namespace:pod-network-test-6422 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:15:10.218: INFO: >>> kubeConfig: /root/.kube/config I0507 00:15:10.255639 7 log.go:172] (0xc001fd6420) (0xc0004ed2c0) Create stream I0507 00:15:10.255675 7 log.go:172] (0xc001fd6420) (0xc0004ed2c0) Stream added, broadcasting: 1 I0507 00:15:10.258398 7 log.go:172] (0xc001fd6420) Reply frame received for 1 I0507 00:15:10.258459 7 log.go:172] (0xc001fd6420) (0xc001256140) Create stream I0507 00:15:10.258481 7 log.go:172] (0xc001fd6420) (0xc001256140) Stream added, broadcasting: 3 I0507 00:15:10.259594 7 log.go:172] (0xc001fd6420) Reply frame received for 3 I0507 00:15:10.259621 7 log.go:172] (0xc001fd6420) (0xc0004edf40) Create stream I0507 00:15:10.259630 7 log.go:172] (0xc001fd6420) (0xc0004edf40) Stream added, broadcasting: 5 I0507 00:15:10.260495 7 log.go:172] 
(0xc001fd6420) Reply frame received for 5 I0507 00:15:10.349972 7 log.go:172] (0xc001fd6420) Data frame received for 3 I0507 00:15:10.350002 7 log.go:172] (0xc001256140) (3) Data frame handling I0507 00:15:10.350033 7 log.go:172] (0xc001256140) (3) Data frame sent I0507 00:15:10.350633 7 log.go:172] (0xc001fd6420) Data frame received for 5 I0507 00:15:10.350673 7 log.go:172] (0xc0004edf40) (5) Data frame handling I0507 00:15:10.350698 7 log.go:172] (0xc001fd6420) Data frame received for 3 I0507 00:15:10.350711 7 log.go:172] (0xc001256140) (3) Data frame handling I0507 00:15:10.352229 7 log.go:172] (0xc001fd6420) Data frame received for 1 I0507 00:15:10.352259 7 log.go:172] (0xc0004ed2c0) (1) Data frame handling I0507 00:15:10.352277 7 log.go:172] (0xc0004ed2c0) (1) Data frame sent I0507 00:15:10.352297 7 log.go:172] (0xc001fd6420) (0xc0004ed2c0) Stream removed, broadcasting: 1 I0507 00:15:10.352411 7 log.go:172] (0xc001fd6420) Go away received I0507 00:15:10.352699 7 log.go:172] (0xc001fd6420) (0xc0004ed2c0) Stream removed, broadcasting: 1 I0507 00:15:10.352727 7 log.go:172] (0xc001fd6420) (0xc001256140) Stream removed, broadcasting: 3 I0507 00:15:10.352748 7 log.go:172] (0xc001fd6420) (0xc0004edf40) Stream removed, broadcasting: 5 May 7 00:15:10.352: INFO: Waiting for responses: map[] May 7 00:15:10.356: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.129:8080/dial?request=hostname&protocol=http&host=10.244.2.128&port=8080&tries=1'] Namespace:pod-network-test-6422 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:15:10.356: INFO: >>> kubeConfig: /root/.kube/config I0507 00:15:10.388596 7 log.go:172] (0xc002ebf810) (0xc000e0f9a0) Create stream I0507 00:15:10.388621 7 log.go:172] (0xc002ebf810) (0xc000e0f9a0) Stream added, broadcasting: 1 I0507 00:15:10.390586 7 log.go:172] (0xc002ebf810) Reply frame received for 1 I0507 00:15:10.390617 7 log.go:172] 
(0xc002ebf810) (0xc0015b0140) Create stream I0507 00:15:10.390627 7 log.go:172] (0xc002ebf810) (0xc0015b0140) Stream added, broadcasting: 3 I0507 00:15:10.391824 7 log.go:172] (0xc002ebf810) Reply frame received for 3 I0507 00:15:10.391880 7 log.go:172] (0xc002ebf810) (0xc000e0fae0) Create stream I0507 00:15:10.391902 7 log.go:172] (0xc002ebf810) (0xc000e0fae0) Stream added, broadcasting: 5 I0507 00:15:10.393080 7 log.go:172] (0xc002ebf810) Reply frame received for 5 I0507 00:15:10.474761 7 log.go:172] (0xc002ebf810) Data frame received for 3 I0507 00:15:10.474785 7 log.go:172] (0xc0015b0140) (3) Data frame handling I0507 00:15:10.474797 7 log.go:172] (0xc0015b0140) (3) Data frame sent I0507 00:15:10.475242 7 log.go:172] (0xc002ebf810) Data frame received for 5 I0507 00:15:10.475259 7 log.go:172] (0xc000e0fae0) (5) Data frame handling I0507 00:15:10.475525 7 log.go:172] (0xc002ebf810) Data frame received for 3 I0507 00:15:10.475553 7 log.go:172] (0xc0015b0140) (3) Data frame handling I0507 00:15:10.476666 7 log.go:172] (0xc002ebf810) Data frame received for 1 I0507 00:15:10.476684 7 log.go:172] (0xc000e0f9a0) (1) Data frame handling I0507 00:15:10.476690 7 log.go:172] (0xc000e0f9a0) (1) Data frame sent I0507 00:15:10.476703 7 log.go:172] (0xc002ebf810) (0xc000e0f9a0) Stream removed, broadcasting: 1 I0507 00:15:10.476733 7 log.go:172] (0xc002ebf810) Go away received I0507 00:15:10.476807 7 log.go:172] (0xc002ebf810) (0xc000e0f9a0) Stream removed, broadcasting: 1 I0507 00:15:10.476824 7 log.go:172] (0xc002ebf810) (0xc0015b0140) Stream removed, broadcasting: 3 I0507 00:15:10.476833 7 log.go:172] (0xc002ebf810) (0xc000e0fae0) Stream removed, broadcasting: 5 May 7 00:15:10.476: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:15:10.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"pod-network-test-6422" for this suite. • [SLOW TEST:27.024 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":58,"skipped":885,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:15:10.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-94464715-9fb0-409f-ab85-b1435481ed05 May 7 00:15:10.562: INFO: Pod name my-hostname-basic-94464715-9fb0-409f-ab85-b1435481ed05: Found 0 pods out of 1 May 7 00:15:15.576: INFO: Pod name my-hostname-basic-94464715-9fb0-409f-ab85-b1435481ed05: Found 1 pods out of 1 
May 7 00:15:15.576: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-94464715-9fb0-409f-ab85-b1435481ed05" are running May 7 00:15:15.582: INFO: Pod "my-hostname-basic-94464715-9fb0-409f-ab85-b1435481ed05-gtfcj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 00:15:10 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 00:15:13 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 00:15:13 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 00:15:10 +0000 UTC Reason: Message:}]) May 7 00:15:15.582: INFO: Trying to dial the pod May 7 00:15:20.594: INFO: Controller my-hostname-basic-94464715-9fb0-409f-ab85-b1435481ed05: Got expected result from replica 1 [my-hostname-basic-94464715-9fb0-409f-ab85-b1435481ed05-gtfcj]: "my-hostname-basic-94464715-9fb0-409f-ab85-b1435481ed05-gtfcj", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:15:20.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2198" for this suite. 
• [SLOW TEST:10.117 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":59,"skipped":912,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:15:20.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-x79s STEP: Creating a pod to test atomic-volume-subpath May 7 00:15:21.153: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-x79s" in namespace "subpath-5078" to be "Succeeded or Failed" May 7 00:15:21.247: INFO: Pod "pod-subpath-test-configmap-x79s": Phase="Pending", Reason="", readiness=false. 
Elapsed: 93.563704ms May 7 00:15:23.288: INFO: Pod "pod-subpath-test-configmap-x79s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13496745s May 7 00:15:25.349: INFO: Pod "pod-subpath-test-configmap-x79s": Phase="Running", Reason="", readiness=true. Elapsed: 4.195211381s May 7 00:15:27.353: INFO: Pod "pod-subpath-test-configmap-x79s": Phase="Running", Reason="", readiness=true. Elapsed: 6.199747318s May 7 00:15:29.357: INFO: Pod "pod-subpath-test-configmap-x79s": Phase="Running", Reason="", readiness=true. Elapsed: 8.20370148s May 7 00:15:31.366: INFO: Pod "pod-subpath-test-configmap-x79s": Phase="Running", Reason="", readiness=true. Elapsed: 10.212713431s May 7 00:15:33.398: INFO: Pod "pod-subpath-test-configmap-x79s": Phase="Running", Reason="", readiness=true. Elapsed: 12.244383298s May 7 00:15:35.402: INFO: Pod "pod-subpath-test-configmap-x79s": Phase="Running", Reason="", readiness=true. Elapsed: 14.248749272s May 7 00:15:37.405: INFO: Pod "pod-subpath-test-configmap-x79s": Phase="Running", Reason="", readiness=true. Elapsed: 16.252129875s May 7 00:15:39.410: INFO: Pod "pod-subpath-test-configmap-x79s": Phase="Running", Reason="", readiness=true. Elapsed: 18.256402473s May 7 00:15:41.414: INFO: Pod "pod-subpath-test-configmap-x79s": Phase="Running", Reason="", readiness=true. Elapsed: 20.260627438s May 7 00:15:43.417: INFO: Pod "pod-subpath-test-configmap-x79s": Phase="Running", Reason="", readiness=true. Elapsed: 22.263882657s May 7 00:15:45.421: INFO: Pod "pod-subpath-test-configmap-x79s": Phase="Running", Reason="", readiness=true. Elapsed: 24.267762136s May 7 00:15:47.425: INFO: Pod "pod-subpath-test-configmap-x79s": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.271868193s STEP: Saw pod success May 7 00:15:47.425: INFO: Pod "pod-subpath-test-configmap-x79s" satisfied condition "Succeeded or Failed" May 7 00:15:47.428: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-x79s container test-container-subpath-configmap-x79s: STEP: delete the pod May 7 00:15:47.487: INFO: Waiting for pod pod-subpath-test-configmap-x79s to disappear May 7 00:15:47.492: INFO: Pod pod-subpath-test-configmap-x79s no longer exists STEP: Deleting pod pod-subpath-test-configmap-x79s May 7 00:15:47.492: INFO: Deleting pod "pod-subpath-test-configmap-x79s" in namespace "subpath-5078" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:15:47.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5078" for this suite. • [SLOW TEST:26.899 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":60,"skipped":931,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:15:47.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:15:47.540: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 7 00:15:50.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6817 create -f -' May 7 00:15:57.585: INFO: stderr: "" May 7 00:15:57.585: INFO: stdout: "e2e-test-crd-publish-openapi-1488-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 7 00:15:57.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6817 delete e2e-test-crd-publish-openapi-1488-crds test-foo' May 7 00:15:58.091: INFO: stderr: "" May 7 00:15:58.091: INFO: stdout: "e2e-test-crd-publish-openapi-1488-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 7 00:15:58.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6817 apply -f -' May 7 00:15:58.500: INFO: stderr: "" May 7 00:15:58.500: INFO: stdout: "e2e-test-crd-publish-openapi-1488-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 7 00:15:58.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6817 delete e2e-test-crd-publish-openapi-1488-crds test-foo' May 7 00:15:58.688: INFO: stderr: 
"" May 7 00:15:58.688: INFO: stdout: "e2e-test-crd-publish-openapi-1488-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 7 00:15:58.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6817 create -f -' May 7 00:15:59.143: INFO: rc: 1 May 7 00:15:59.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6817 apply -f -' May 7 00:15:59.681: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 7 00:15:59.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6817 create -f -' May 7 00:16:00.079: INFO: rc: 1 May 7 00:16:00.079: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6817 apply -f -' May 7 00:16:00.346: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 7 00:16:00.346: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1488-crds' May 7 00:16:00.679: INFO: stderr: "" May 7 00:16:00.679: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1488-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 7 00:16:00.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1488-crds.metadata' May 7 00:16:00.932: INFO: stderr: "" May 7 00:16:00.932: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1488-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. 
This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. 
In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. 
If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. 
Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 7 00:16:00.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1488-crds.spec' May 7 00:16:01.231: INFO: stderr: "" May 7 00:16:01.231: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1488-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 7 00:16:01.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1488-crds.spec.bars' May 7 00:16:01.512: INFO: stderr: "" May 7 00:16:01.512: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1488-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 7 00:16:01.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1488-crds.spec.bars2' May 7 00:16:01.775: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:16:03.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6817" for this suite. 
• [SLOW TEST:16.346 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":61,"skipped":938,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:16:03.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-8588 STEP: creating a selector STEP: Creating the service pods in kubernetes May 7 00:16:03.957: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 7 00:16:04.019: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 7 00:16:06.024: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 7 00:16:08.024: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready 
= true) May 7 00:16:10.024: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 00:16:12.024: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 00:16:14.024: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 00:16:16.024: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 00:16:18.024: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 00:16:20.024: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 00:16:22.024: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 00:16:24.024: INFO: The status of Pod netserver-0 is Running (Ready = true) May 7 00:16:24.030: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 7 00:16:28.089: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.38:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8588 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:16:28.090: INFO: >>> kubeConfig: /root/.kube/config I0507 00:16:28.121521 7 log.go:172] (0xc004856370) (0xc0011683c0) Create stream I0507 00:16:28.121552 7 log.go:172] (0xc004856370) (0xc0011683c0) Stream added, broadcasting: 1 I0507 00:16:28.125772 7 log.go:172] (0xc004856370) Reply frame received for 1 I0507 00:16:28.125817 7 log.go:172] (0xc004856370) (0xc0011685a0) Create stream I0507 00:16:28.125829 7 log.go:172] (0xc004856370) (0xc0011685a0) Stream added, broadcasting: 3 I0507 00:16:28.126744 7 log.go:172] (0xc004856370) Reply frame received for 3 I0507 00:16:28.126782 7 log.go:172] (0xc004856370) (0xc000a12960) Create stream I0507 00:16:28.126794 7 log.go:172] (0xc004856370) (0xc000a12960) Stream added, broadcasting: 5 I0507 00:16:28.127576 7 log.go:172] (0xc004856370) Reply frame received for 5 I0507 00:16:28.218430 7 log.go:172] (0xc004856370) Data frame received for 3 
I0507 00:16:28.218460 7 log.go:172] (0xc0011685a0) (3) Data frame handling I0507 00:16:28.218471 7 log.go:172] (0xc0011685a0) (3) Data frame sent I0507 00:16:28.218572 7 log.go:172] (0xc004856370) Data frame received for 5 I0507 00:16:28.218602 7 log.go:172] (0xc000a12960) (5) Data frame handling I0507 00:16:28.219298 7 log.go:172] (0xc004856370) Data frame received for 3 I0507 00:16:28.219314 7 log.go:172] (0xc0011685a0) (3) Data frame handling I0507 00:16:28.221787 7 log.go:172] (0xc004856370) Data frame received for 1 I0507 00:16:28.221804 7 log.go:172] (0xc0011683c0) (1) Data frame handling I0507 00:16:28.221811 7 log.go:172] (0xc0011683c0) (1) Data frame sent I0507 00:16:28.221823 7 log.go:172] (0xc004856370) (0xc0011683c0) Stream removed, broadcasting: 1 I0507 00:16:28.221840 7 log.go:172] (0xc004856370) Go away received I0507 00:16:28.221999 7 log.go:172] (0xc004856370) (0xc0011683c0) Stream removed, broadcasting: 1 I0507 00:16:28.222019 7 log.go:172] (0xc004856370) (0xc0011685a0) Stream removed, broadcasting: 3 I0507 00:16:28.222028 7 log.go:172] (0xc004856370) (0xc000a12960) Stream removed, broadcasting: 5 May 7 00:16:28.222: INFO: Found all expected endpoints: [netserver-0] May 7 00:16:28.224: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.131:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8588 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:16:28.224: INFO: >>> kubeConfig: /root/.kube/config I0507 00:16:28.244851 7 log.go:172] (0xc00480a420) (0xc000a13400) Create stream I0507 00:16:28.244882 7 log.go:172] (0xc00480a420) (0xc000a13400) Stream added, broadcasting: 1 I0507 00:16:28.249668 7 log.go:172] (0xc00480a420) Reply frame received for 1 I0507 00:16:28.249743 7 log.go:172] (0xc00480a420) (0xc000a13680) Create stream I0507 00:16:28.249782 7 log.go:172] (0xc00480a420) (0xc000a13680) Stream added, 
broadcasting: 3 I0507 00:16:28.250929 7 log.go:172] (0xc00480a420) Reply frame received for 3 I0507 00:16:28.250992 7 log.go:172] (0xc00480a420) (0xc000ef9220) Create stream I0507 00:16:28.251032 7 log.go:172] (0xc00480a420) (0xc000ef9220) Stream added, broadcasting: 5 I0507 00:16:28.252137 7 log.go:172] (0xc00480a420) Reply frame received for 5 I0507 00:16:28.334057 7 log.go:172] (0xc00480a420) Data frame received for 3 I0507 00:16:28.334094 7 log.go:172] (0xc000a13680) (3) Data frame handling I0507 00:16:28.334111 7 log.go:172] (0xc000a13680) (3) Data frame sent I0507 00:16:28.334120 7 log.go:172] (0xc00480a420) Data frame received for 3 I0507 00:16:28.334125 7 log.go:172] (0xc000a13680) (3) Data frame handling I0507 00:16:28.334187 7 log.go:172] (0xc00480a420) Data frame received for 5 I0507 00:16:28.334216 7 log.go:172] (0xc000ef9220) (5) Data frame handling I0507 00:16:28.335782 7 log.go:172] (0xc00480a420) Data frame received for 1 I0507 00:16:28.335807 7 log.go:172] (0xc000a13400) (1) Data frame handling I0507 00:16:28.335823 7 log.go:172] (0xc000a13400) (1) Data frame sent I0507 00:16:28.335845 7 log.go:172] (0xc00480a420) (0xc000a13400) Stream removed, broadcasting: 1 I0507 00:16:28.335859 7 log.go:172] (0xc00480a420) Go away received I0507 00:16:28.335949 7 log.go:172] (0xc00480a420) (0xc000a13400) Stream removed, broadcasting: 1 I0507 00:16:28.335966 7 log.go:172] (0xc00480a420) (0xc000a13680) Stream removed, broadcasting: 3 I0507 00:16:28.335975 7 log.go:172] (0xc00480a420) (0xc000ef9220) Stream removed, broadcasting: 5 May 7 00:16:28.335: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:16:28.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8588" for this suite. 
• [SLOW TEST:24.519 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":62,"skipped":941,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:16:28.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-7252838f-d336-4a7d-a0ee-b9ae6c29196a in namespace container-probe-5448 May 7 00:16:32.562: INFO: Started pod liveness-7252838f-d336-4a7d-a0ee-b9ae6c29196a in namespace container-probe-5448 STEP: checking the pod's current state and verifying that restartCount is present May 7 00:16:32.564: INFO: 
Initial restart count of pod liveness-7252838f-d336-4a7d-a0ee-b9ae6c29196a is 0
May 7 00:16:58.988: INFO: Restart count of pod container-probe-5448/liveness-7252838f-d336-4a7d-a0ee-b9ae6c29196a is now 1 (26.423456806s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:16:59.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5448" for this suite.
• [SLOW TEST:30.827 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":63,"skipped":965,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:16:59.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 7 00:16:59.489: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4d02da7-3b68-4d95-b192-0510122a9fed" in namespace "downward-api-1146" to be "Succeeded or Failed"
May 7 00:16:59.511: INFO: Pod "downwardapi-volume-f4d02da7-3b68-4d95-b192-0510122a9fed": Phase="Pending", Reason="", readiness=false. Elapsed: 21.219264ms
May 7 00:17:01.515: INFO: Pod "downwardapi-volume-f4d02da7-3b68-4d95-b192-0510122a9fed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025738005s
May 7 00:17:03.520: INFO: Pod "downwardapi-volume-f4d02da7-3b68-4d95-b192-0510122a9fed": Phase="Running", Reason="", readiness=true. Elapsed: 4.030264147s
May 7 00:17:05.524: INFO: Pod "downwardapi-volume-f4d02da7-3b68-4d95-b192-0510122a9fed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034781541s
STEP: Saw pod success
May 7 00:17:05.524: INFO: Pod "downwardapi-volume-f4d02da7-3b68-4d95-b192-0510122a9fed" satisfied condition "Succeeded or Failed"
May 7 00:17:05.527: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f4d02da7-3b68-4d95-b192-0510122a9fed container client-container:
STEP: delete the pod
May 7 00:17:05.578: INFO: Waiting for pod downwardapi-volume-f4d02da7-3b68-4d95-b192-0510122a9fed to disappear
May 7 00:17:05.618: INFO: Pod downwardapi-volume-f4d02da7-3b68-4d95-b192-0510122a9fed no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:17:05.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1146" for this suite.
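The DefaultMode spec that just passed sets the permission bits on downward-API volume files. One detail worth noting when reading this log: the API server stores file modes as decimal int32 values, so the familiar octal 0644 appears as `DefaultMode:*420` in the pod dumps later in this run. A small sketch of the conversion (helper names are mine, not from the test):

```python
def octal_to_api(mode: str) -> int:
    """Parse an octal mode string such as "0644" into the decimal
    integer the Kubernetes API reports (e.g. DefaultMode: 420)."""
    return int(mode, 8)

def api_to_octal(value: int) -> str:
    """Render an API decimal mode back as a zero-padded octal string,
    e.g. 420 -> "0644"."""
    return format(value, "04o")
```

Keeping this in mind avoids misreading "420" in a dump as an unusually permissive mode.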
• [SLOW TEST:6.432 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":64,"skipped":967,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:17:05.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:17:23.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6734" for this suite.
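The next spec scales "webserver-deployment" from 10 to 30 replicas while a rollout to a broken image (`webserver:404`) is still in flight, then verifies the added replicas were split proportionally between the two ReplicaSets: the old one goes 8 → 20 and the new one 5 → 13, against a ceiling of 30 replicas + maxSurge 3 = 33. A simplified largest-remainder model of that split (the real kube-controller-manager algorithm works off the max-replicas annotations and differs in tie-breaking details):

```python
import math

def split_proportionally(current, target_total):
    """Distribute target_total - sum(current) extra replicas across
    ReplicaSets in proportion to their current sizes; leftovers go to
    the sets with the largest fractional share. Simplified model of
    Deployment proportional scaling, not the exact controller code;
    assumes target_total >= sum(current)."""
    total = sum(current.values())
    extra = target_total - total
    shares = {name: extra * size / total for name, size in current.items()}
    scaled = {name: size + math.floor(shares[name]) for name, size in current.items()}
    leftover = target_total - sum(scaled.values())
    # hand leftovers to the sets with the largest fractional remainder
    by_remainder = sorted(current, key=lambda n: shares[n] - math.floor(shares[n]), reverse=True)
    for name in by_remainder[:leftover]:
        scaled[name] += 1
    return scaled
```

With the sizes from this run, {old: 8, new: 5} and a ceiling of 33, the 20 extra replicas split as 12.3 and 7.7, and the single leftover lands on the new ReplicaSet, reproducing the 20 and 13 the log checks for.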
• [SLOW TEST:17.445 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":288,"completed":65,"skipped":968,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:17:23.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 7 00:17:23.335: INFO: Creating deployment "webserver-deployment"
May 7 00:17:23.368: INFO: Waiting for observed generation 1
May 7 00:17:25.565: INFO: Waiting for all required pods to come up
May 7 00:17:25.571: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 7 00:17:37.885: INFO: Waiting for deployment "webserver-deployment" to complete
May 7 00:17:37.899: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 7 00:17:37.906: INFO: Updating deployment webserver-deployment
May 7 00:17:37.906: INFO:
Waiting for observed generation 2
May 7 00:17:40.427: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 7 00:17:42.680: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 7 00:17:42.683: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 7 00:17:42.691: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 7 00:17:42.691: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 7 00:17:42.696: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 7 00:17:42.701: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May 7 00:17:42.701: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May 7 00:17:42.707: INFO: Updating deployment webserver-deployment
May 7 00:17:42.707: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May 7 00:17:42.763: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 7 00:17:42.853: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
May 7 00:17:42.870: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-2681 /apis/apps/v1/namespaces/deployment-2681/deployments/webserver-deployment 41969a34-57b3-4adc-8769-e67a3274b60b 2165594 3 2020-05-07 00:17:23 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-07 00:17:42 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0009e6cd8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-07 00:17:42 +0000 UTC,LastTransitionTime:2020-05-07 00:17:23 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-07 00:17:42 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 7 00:17:42.947: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-2681 /apis/apps/v1/namespaces/deployment-2681/replicasets/webserver-deployment-6676bcd6d4 defd6b79-bd52-4f7a-86df-94ee1b8539a8 2165581 3 2020-05-07 00:17:37 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 41969a34-57b3-4adc-8769-e67a3274b60b 0xc0049e85a7 0xc0049e85a8}] [] [{kube-controller-manager Update apps/v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41969a34-57b3-4adc-8769-e67a3274b60b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0049e8628 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 7 00:17:42.947: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 7 00:17:42.948: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-2681 /apis/apps/v1/namespaces/deployment-2681/replicasets/webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 2165578 3 2020-05-07 00:17:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 41969a34-57b3-4adc-8769-e67a3274b60b 0xc0049e8687 0xc0049e8688}] [] [{kube-controller-manager Update apps/v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41969a34-57b3-4adc-8769-e67a3274b60b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Sele
ctor:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0049e86f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 7 00:17:43.056: INFO: Pod "webserver-deployment-6676bcd6d4-bmsv2" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bmsv2 webserver-deployment-6676bcd6d4- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-6676bcd6d4-bmsv2 798aeb42-2657-4b64-8615-e63e9006b406 2165601 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 defd6b79-bd52-4f7a-86df-94ee1b8539a8 0xc00252c8b7 0xc00252c8b8}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"defd6b79-bd52-4f7a-86df-94ee1b8539a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.056: INFO: Pod "webserver-deployment-6676bcd6d4-bwrwh" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bwrwh webserver-deployment-6676bcd6d4- deployment-2681 
/api/v1/namespaces/deployment-2681/pods/webserver-deployment-6676bcd6d4-bwrwh 6e73ed22-3a79-4449-9fa7-f0afa1f2b2cf 2165650 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 defd6b79-bd52-4f7a-86df-94ee1b8539a8 0xc00252c9f7 0xc00252c9f8}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"defd6b79-bd52-4f7a-86df-94ee1b8539a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:43 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.057: INFO: Pod "webserver-deployment-6676bcd6d4-crnz2" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-crnz2 webserver-deployment-6676bcd6d4- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-6676bcd6d4-crnz2 fd556089-5efb-4599-a5f5-23d30ad8f242 2165546 0 2020-05-07 00:17:38 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 defd6b79-bd52-4f7a-86df-94ee1b8539a8 0xc00252cbf7 0xc00252cbf8}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"defd6b79-bd52-4f7a-86df-94ee1b8539a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:17:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-07 00:17:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.057: INFO: Pod "webserver-deployment-6676bcd6d4-ctnvx" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-ctnvx webserver-deployment-6676bcd6d4- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-6676bcd6d4-ctnvx 55334a48-3f5b-4663-8474-078a3cae2857 2165632 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 defd6b79-bd52-4f7a-86df-94ee1b8539a8 0xc00252cdb7 0xc00252cdb8}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"defd6b79-bd52-4f7a-86df-94ee1b8539a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.057: INFO: Pod "webserver-deployment-6676bcd6d4-dmvp7" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-dmvp7 webserver-deployment-6676bcd6d4- deployment-2681 
/api/v1/namespaces/deployment-2681/pods/webserver-deployment-6676bcd6d4-dmvp7 0f3362bb-baef-495f-98c2-8355a9f375b3 2165613 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 defd6b79-bd52-4f7a-86df-94ee1b8539a8 0xc00252cf07 0xc00252cf08}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"defd6b79-bd52-4f7a-86df-94ee1b8539a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.057: INFO: Pod "webserver-deployment-6676bcd6d4-f6tvc" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-f6tvc webserver-deployment-6676bcd6d4- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-6676bcd6d4-f6tvc f93a1243-6cb1-40cb-8d8a-c834e6a7eb4d 2165629 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 defd6b79-bd52-4f7a-86df-94ee1b8539a8 0xc00252d047 0xc00252d048}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"defd6b79-bd52-4f7a-86df-94ee1b8539a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersiste
ntDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName
:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.058: INFO: Pod "webserver-deployment-6676bcd6d4-f874l" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-f874l webserver-deployment-6676bcd6d4- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-6676bcd6d4-f874l 2015df7c-3e44-42e3-b7a8-7511624be148 2165561 0 2020-05-07 00:17:38 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 defd6b79-bd52-4f7a-86df-94ee1b8539a8 0xc00252d197 0xc00252d198}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"defd6b79-bd52-4f7a-86df-94ee1b8539a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:17:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:39 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-07 00:17:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.058: INFO: Pod "webserver-deployment-6676bcd6d4-jnx26" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-jnx26 webserver-deployment-6676bcd6d4- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-6676bcd6d4-jnx26 70d0f2cc-0926-4691-9322-90e45707a699 2165559 0 2020-05-07 00:17:38 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 defd6b79-bd52-4f7a-86df-94ee1b8539a8 0xc00252d347 0xc00252d348}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"defd6b79-bd52-4f7a-86df-94ee1b8539a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:17:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-07 00:17:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.058: INFO: Pod "webserver-deployment-6676bcd6d4-krgxz" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-krgxz webserver-deployment-6676bcd6d4- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-6676bcd6d4-krgxz 7fdb324c-1cf0-4e7b-a28b-95ab59c104c7 2165626 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 defd6b79-bd52-4f7a-86df-94ee1b8539a8 0xc00252d4f7 0xc00252d4f8}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"defd6b79-bd52-4f7a-86df-94ee1b8539a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.058: INFO: Pod "webserver-deployment-6676bcd6d4-lbbwn" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-lbbwn webserver-deployment-6676bcd6d4- deployment-2681 
/api/v1/namespaces/deployment-2681/pods/webserver-deployment-6676bcd6d4-lbbwn 11dc4e4e-8609-4cba-be9f-942c762fa1e7 2165537 0 2020-05-07 00:17:37 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 defd6b79-bd52-4f7a-86df-94ee1b8539a8 0xc00252d637 0xc00252d638}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"defd6b79-bd52-4f7a-86df-94ee1b8539a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:17:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-07 00:17:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.058: INFO: Pod "webserver-deployment-6676bcd6d4-p7xpb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-p7xpb webserver-deployment-6676bcd6d4- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-6676bcd6d4-p7xpb 671a5d71-9e64-421c-a8ce-d20f11f9220b 2165630 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 defd6b79-bd52-4f7a-86df-94ee1b8539a8 0xc00252d7e7 0xc00252d7e8}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"defd6b79-bd52-4f7a-86df-94ee1b8539a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.059: INFO: Pod "webserver-deployment-6676bcd6d4-pphfg" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pphfg webserver-deployment-6676bcd6d4- deployment-2681 
/api/v1/namespaces/deployment-2681/pods/webserver-deployment-6676bcd6d4-pphfg 9eaadeb5-81ec-4030-90b1-a237afa32ecb 2165567 0 2020-05-07 00:17:39 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 defd6b79-bd52-4f7a-86df-94ee1b8539a8 0xc00252d927 0xc00252d928}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"defd6b79-bd52-4f7a-86df-94ee1b8539a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:17:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:39 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-07 00:17:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.059: INFO: Pod "webserver-deployment-6676bcd6d4-wvvv8" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-wvvv8 webserver-deployment-6676bcd6d4- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-6676bcd6d4-wvvv8 69d9ce00-e286-4dcf-be91-df1ea6ddc173 2165649 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 defd6b79-bd52-4f7a-86df-94ee1b8539a8 0xc00252dad7 0xc00252dad8}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"defd6b79-bd52-4f7a-86df-94ee1b8539a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:17:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-07 00:17:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.059: INFO: Pod "webserver-deployment-84855cf797-24fqv" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-24fqv webserver-deployment-84855cf797- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-24fqv 4ae9abb4-63cc-41ac-aea1-087828a8226e 2165465 0 2020-05-07 00:17:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc00256c027 0xc00256c028}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:17:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.41\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{}
,StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.41,StartTime:2020-05-07 00:17:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-07 00:17:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e3a06694987e4599173d7cf65ea9db239061682b46bc0e5c5a15d2cdf3b8500a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.059: INFO: Pod "webserver-deployment-84855cf797-2fzrq" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-2fzrq webserver-deployment-84855cf797- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-2fzrq 8cd04fcd-19f6-479f-8bd1-c9ca5fff2463 2165624 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc00256c1d7 0xc00256c1d8}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.059: INFO: Pod "webserver-deployment-84855cf797-2s6mt" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-2s6mt webserver-deployment-84855cf797- deployment-2681 
/api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-2s6mt f1e7308f-a5f9-487f-8b92-a5cd5a77c210 2165498 0 2020-05-07 00:17:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc00256c307 0xc00256c308}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:17:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.136\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.136,StartTime:2020-05-07 00:17:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-07 00:17:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b5a103626a73c5895c9066ebe9fb91937c2fe630ece21691bcd4f8d4a9ee4307,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.136,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.060: INFO: Pod "webserver-deployment-84855cf797-496gq" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-496gq webserver-deployment-84855cf797- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-496gq 97a9dc8c-de62-4c96-9b2d-067f8dce19d3 2165599 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc00256c6a7 0xc00256c6a8}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.060: INFO: Pod "webserver-deployment-84855cf797-57tgl" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-57tgl webserver-deployment-84855cf797- deployment-2681 
/api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-57tgl 5f54d263-76b5-42fa-bdee-40a3e66bb8ff 2165631 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc00256c7d7 0xc00256c7d8}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/service
account,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.060: INFO: Pod "webserver-deployment-84855cf797-597zw" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-597zw webserver-deployment-84855cf797- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-597zw 09965fc5-ee6b-4da4-827e-ae983dad70a6 2165484 0 2020-05-07 00:17:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc00256c9b7 0xc00256c9b8}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:17:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{}
,StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.42,StartTime:2020-05-07 00:17:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-07 00:17:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8fcb8b55133ac100271677237f2720fb6e5cd47069d85828b33b2bfae956f0be,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.060: INFO: Pod "webserver-deployment-84855cf797-5d6ct" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5d6ct webserver-deployment-84855cf797- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-5d6ct 824fea20-4374-455c-8c8f-d0eaf4198266 2165606 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc00256cd37 0xc00256cd38}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.060: INFO: Pod "webserver-deployment-84855cf797-5z5qq" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5z5qq webserver-deployment-84855cf797- deployment-2681 
/api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-5z5qq a808504a-6060-4674-972d-d3979b64aeb3 2165627 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc00256cf37 0xc00256cf38}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/service
account,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.061: INFO: Pod "webserver-deployment-84855cf797-8zkv7" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-8zkv7 webserver-deployment-84855cf797- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-8zkv7 1f61a130-d9b8-4f74-b532-f70f645fbee9 2165505 0 2020-05-07 00:17:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc00256d157 0xc00256d158}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:17:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{}
,StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.43,StartTime:2020-05-07 00:17:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-07 00:17:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://aa4c0978fc6d72da362689d5c300607646816318a575802d210520c514a1fc09,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.061: INFO: Pod "webserver-deployment-84855cf797-9rhwb" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-9rhwb webserver-deployment-84855cf797- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-9rhwb aeba33d1-a398-4d12-bcad-e5d551183b57 2165609 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc00256d407 0xc00256d408}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.061: INFO: Pod "webserver-deployment-84855cf797-btj7b" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-btj7b webserver-deployment-84855cf797- deployment-2681 
/api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-btj7b 0993cba8-6388-459c-b504-97eca01a4cf0 2165473 0 2020-05-07 00:17:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc00256d657 0xc00256d658}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:17:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{}
,StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.40,StartTime:2020-05-07 00:17:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-07 00:17:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0180c2d50b589d6f9605b52d34153ee2ed7e1d4bb0d5a0d98ceb49b851b33bae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.061: INFO: Pod "webserver-deployment-84855cf797-cjpsp" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cjpsp webserver-deployment-84855cf797- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-cjpsp 1e02dac8-fcd2-467a-a13e-8a8eeae972e2 2165590 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc00256d927 0xc00256d928}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.062: INFO: Pod "webserver-deployment-84855cf797-dwgn7" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dwgn7 webserver-deployment-84855cf797- deployment-2681 
/api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-dwgn7 dd2040bf-d534-476e-a0c9-e279ff765ccc 2165628 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc00256dac7 0xc00256dac8}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/service
account,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.062: INFO: Pod "webserver-deployment-84855cf797-j8wj6" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-j8wj6 webserver-deployment-84855cf797- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-j8wj6 60b0ba77-cdaa-4638-aeda-f3fa13de171b 2165451 0 2020-05-07 00:17:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc00256dd37 0xc00256dd38}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:17:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.134\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.134,StartTime:2020-05-07 00:17:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-07 00:17:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bcddbebadef20bf93b71f786936a95f9f721d07f98ff68345f00fb4d3802232f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.134,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.062: INFO: Pod "webserver-deployment-84855cf797-k7vbz" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-k7vbz webserver-deployment-84855cf797- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-k7vbz 78cd05cd-2167-4deb-89df-c74d13d968b3 2165642 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc00256dee7 0xc00256dee8}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:17:42 
+0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil
,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-07 00:17:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.062: INFO: Pod "webserver-deployment-84855cf797-p4jld" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-p4jld webserver-deployment-84855cf797- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-p4jld ec8497ba-dcc0-4023-85a3-667db65c3fd1 2165617 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc002d16077 0xc002d16078}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.062: INFO: Pod "webserver-deployment-84855cf797-tklr2" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tklr2 webserver-deployment-84855cf797- deployment-2681 
/api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-tklr2 acd734ae-095c-4579-a1f8-c18fb6f83160 2165491 0 2020-05-07 00:17:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc002d161b7 0xc002d161b8}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:17:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.138\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.138,StartTime:2020-05-07 00:17:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-07 00:17:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c4090b9bdb7cf3f9240c01761f68376acc72737bbb19f67062aa9a6e976f8249,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.138,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.063: INFO: Pod "webserver-deployment-84855cf797-tvbjb" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tvbjb webserver-deployment-84855cf797- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-tvbjb 3b0ce29d-51ef-4eda-bda9-b984963de93f 2165475 0 2020-05-07 00:17:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc002d16467 0xc002d16468}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:17:35 +0000 
UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.135\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]
VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.135,StartTime:2020-05-07 
00:17:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-07 00:17:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3b4730ba3f7000346aae8edf0efb3bf4b0e36abd2f6ea6fe527695d203888824,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.135,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.063: INFO: Pod "webserver-deployment-84855cf797-w6xqr" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-w6xqr webserver-deployment-84855cf797- deployment-2681 /api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-w6xqr 51dbb9b6-10d5-4e35-9993-06c93dffab7d 2165615 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc002d16a97 0xc002d16a98}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 00:17:43.063: INFO: Pod "webserver-deployment-84855cf797-wbnp4" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wbnp4 webserver-deployment-84855cf797- deployment-2681 
/api/v1/namespaces/deployment-2681/pods/webserver-deployment-84855cf797-wbnp4 65689769-ae2b-4926-ac1f-836ebacc5cb6 2165623 0 2020-05-07 00:17:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6e74c4f7-f26e-4534-b82b-cf7e0d564b84 0xc002d16d87 0xc002d16d88}] [] [{kube-controller-manager Update v1 2020-05-07 00:17:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e74c4f7-f26e-4534-b82b-cf7e0d564b84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-whhwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-whhwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-whhwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/service
account,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:17:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:17:43.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2681" for this suite. • [SLOW TEST:20.209 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":66,"skipped":978,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:17:43.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API 
volume plugin May 7 00:17:43.649: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a" in namespace "downward-api-1771" to be "Succeeded or Failed" May 7 00:17:43.683: INFO: Pod "downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a": Phase="Pending", Reason="", readiness=false. Elapsed: 33.611533ms May 7 00:17:46.002: INFO: Pod "downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.352590981s May 7 00:17:48.012: INFO: Pod "downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.362132538s May 7 00:17:50.153: INFO: Pod "downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.503641842s May 7 00:17:52.908: INFO: Pod "downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.25889061s May 7 00:17:55.851: INFO: Pod "downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.201465854s May 7 00:17:58.464: INFO: Pod "downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.814389779s May 7 00:18:00.793: INFO: Pod "downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.143845684s May 7 00:18:03.640: INFO: Pod "downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.990952131s May 7 00:18:06.377: INFO: Pod "downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.727864313s May 7 00:18:08.792: INFO: Pod "downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.142990848s May 7 00:18:11.194: INFO: Pod "downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.544790913s STEP: Saw pod success May 7 00:18:11.194: INFO: Pod "downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a" satisfied condition "Succeeded or Failed" May 7 00:18:11.229: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a container client-container: STEP: delete the pod May 7 00:18:12.336: INFO: Waiting for pod downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a to disappear May 7 00:18:12.385: INFO: Pod downwardapi-volume-f98a27ae-ca7f-4095-a6b4-c708a195805a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:18:12.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1771" for this suite. 
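The repeated `Phase="Pending" … Elapsed: …` lines above come from the framework polling the pod until it reaches the "Succeeded or Failed" condition. As a minimal illustrative sketch (not the framework's actual Go code; all names here are invented for illustration), the loop behaves roughly like this:

```python
import time

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_pod_terminated(name, get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase or the timeout
    expires, recording one elapsed-time line per poll (log-style)."""
    start = clock()
    log = []
    while True:
        phase = get_phase()
        elapsed = clock() - start
        log.append(f'Pod "{name}": Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in TERMINAL_PHASES:
            return phase, log
        if elapsed > timeout:
            raise TimeoutError(f"pod {name} still {phase} after {elapsed:.1f}s")
        sleep(interval)

# Stubbed phase source standing in for the API server: the pod reports
# Pending twice, then Succeeded, as in the run above.
phases = iter(["Pending", "Pending", "Succeeded"])
final, log = wait_for_pod_terminated("downwardapi-volume-demo",
                                     lambda: next(phases), interval=0)
```

The real framework additionally fetches container logs after success and waits for the pod to disappear on deletion, as the subsequent log lines show.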
• [SLOW TEST:29.546 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":67,"skipped":985,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:18:12.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:18:13.927: INFO: Creating ReplicaSet my-hostname-basic-0845fb93-9bbb-446f-b626-fa0de2b57a8f May 7 00:18:14.065: INFO: Pod name my-hostname-basic-0845fb93-9bbb-446f-b626-fa0de2b57a8f: Found 0 pods out of 1 May 7 00:18:19.087: INFO: Pod name my-hostname-basic-0845fb93-9bbb-446f-b626-fa0de2b57a8f: Found 1 pods out of 1 May 7 00:18:19.087: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-0845fb93-9bbb-446f-b626-fa0de2b57a8f" is running May 7 00:18:23.452: INFO: Pod "my-hostname-basic-0845fb93-9bbb-446f-b626-fa0de2b57a8f-7d48d" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 
00:18:15 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 00:18:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-0845fb93-9bbb-446f-b626-fa0de2b57a8f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 00:18:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-0845fb93-9bbb-446f-b626-fa0de2b57a8f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 00:18:14 +0000 UTC Reason: Message:}]) May 7 00:18:23.453: INFO: Trying to dial the pod May 7 00:18:28.464: INFO: Controller my-hostname-basic-0845fb93-9bbb-446f-b626-fa0de2b57a8f: Got expected result from replica 1 [my-hostname-basic-0845fb93-9bbb-446f-b626-fa0de2b57a8f-7d48d]: "my-hostname-basic-0845fb93-9bbb-446f-b626-fa0de2b57a8f-7d48d", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:18:28.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9841" for this suite. 
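The "Got expected result from replica …" line reflects the ReplicaSet test dialing each pod and checking that it serves its own hostname. A hedged sketch of that verification step (the helper name and the stubbed `dial` function are assumptions, not the e2e framework's API):

```python
def verify_replica_responses(expected_pods, dial):
    """Dial each pod in the ReplicaSet and require that its HTTP response
    body equals its own pod name, counting successes as the log does."""
    successes = []
    for pod in expected_pods:
        body = dial(pod)  # in the real test, an HTTP GET through the apiserver proxy
        if body != pod:
            raise AssertionError(f"replica {pod} returned {body!r}, want its hostname")
        successes.append(pod)
        print(f"Got expected result from replica {len(successes)} [{pod}]: "
              f'"{body}", {len(successes)} of {len(expected_pods)} required successes so far')
    return successes

# Stub: a well-behaved replica echoes its pod name as its hostname.
pods = ["my-hostname-basic-0845fb93-7d48d"]
ok = verify_replica_responses(pods, lambda p: p)
```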
• [SLOW TEST:15.644 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":68,"skipped":994,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:18:28.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-e03c57fd-3d9c-4ec6-a516-9ad46fe827f6
STEP: Creating a pod to test consume configMaps
May 7 00:18:29.420: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5d8069bd-67ca-43cf-bd34-242d5da8df2c" in namespace "projected-2450" to be "Succeeded or Failed"
May 7 00:18:29.556: INFO: Pod "pod-projected-configmaps-5d8069bd-67ca-43cf-bd34-242d5da8df2c": Phase="Pending", Reason="", readiness=false. Elapsed: 135.696704ms
May 7 00:18:31.561: INFO: Pod "pod-projected-configmaps-5d8069bd-67ca-43cf-bd34-242d5da8df2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141236646s
May 7 00:18:33.731: INFO: Pod "pod-projected-configmaps-5d8069bd-67ca-43cf-bd34-242d5da8df2c": Phase="Running", Reason="", readiness=true. Elapsed: 4.311367522s
May 7 00:18:35.764: INFO: Pod "pod-projected-configmaps-5d8069bd-67ca-43cf-bd34-242d5da8df2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.343731292s
STEP: Saw pod success
May 7 00:18:35.764: INFO: Pod "pod-projected-configmaps-5d8069bd-67ca-43cf-bd34-242d5da8df2c" satisfied condition "Succeeded or Failed"
May 7 00:18:35.767: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-5d8069bd-67ca-43cf-bd34-242d5da8df2c container projected-configmap-volume-test:
STEP: delete the pod
May 7 00:18:35.835: INFO: Waiting for pod pod-projected-configmaps-5d8069bd-67ca-43cf-bd34-242d5da8df2c to disappear
May 7 00:18:35.918: INFO: Pod pod-projected-configmaps-5d8069bd-67ca-43cf-bd34-242d5da8df2c no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:18:35.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2450" for this suite.
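The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` / `Phase="Pending" ... Elapsed: ...` lines follow a plain poll-until-terminal-phase loop with a deadline. A minimal sketch of that pattern; `get_phase` is a hypothetical stand-in for the API call, and the injected `sleep` lets the sketch run without a cluster:

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal pod phase or timeout elapses,
    logging elapsed time on each poll, as the framework's wait loop does."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)

# Simulated Pending -> Running -> Succeeded progression, mirroring the log above.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases), timeout=10, sleep=lambda s: None)
```

Note that `Running` is not terminal here: the test treats only `Succeeded` or `Failed` as satisfying the wait condition.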
• [SLOW TEST:7.480 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":69,"skipped":1082,"failed":0}
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:18:35.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting the auto-created API token
STEP: reading a file in the container
May 7 00:18:40.635: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9370 pod-service-account-234a9a22-80ec-4fca-b1f1-a26db26a7eeb -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
May 7 00:18:40.872: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9370 pod-service-account-234a9a22-80ec-4fca-b1f1-a26db26a7eeb -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
May 7 00:18:41.069: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9370 pod-service-account-234a9a22-80ec-4fca-b1f1-a26db26a7eeb -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:18:41.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9370" for this suite.
• [SLOW TEST:5.320 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":70,"skipped":1090,"failed":0}
SS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:18:41.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 7 00:18:41.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:18:45.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4941" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":71,"skipped":1092,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:18:45.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7112 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7112 I0507 00:18:46.430264 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7112, replica count: 2 I0507 00:18:49.480736 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:18:52.481005 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 7 00:18:52.481: INFO: Creating new exec pod May 7 00:18:57.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7112 execpodzh6qj -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 7 00:18:57.893: INFO: stderr: "I0507 00:18:57.811722 1000 log.go:172] (0xc000591d90) (0xc00023a820) Create stream\nI0507 00:18:57.811765 1000 log.go:172] (0xc000591d90) (0xc00023a820) Stream added, broadcasting: 1\nI0507 00:18:57.814143 1000 log.go:172] (0xc000591d90) Reply frame received for 1\nI0507 00:18:57.814179 1000 log.go:172] (0xc000591d90) (0xc000690a00) Create stream\nI0507 00:18:57.814188 1000 log.go:172] (0xc000591d90) (0xc000690a00) Stream added, broadcasting: 3\nI0507 00:18:57.815062 1000 log.go:172] (0xc000591d90) Reply frame received for 3\nI0507 00:18:57.815109 1000 log.go:172] (0xc000591d90) (0xc00023b4a0) Create stream\nI0507 00:18:57.815127 1000 log.go:172] (0xc000591d90) (0xc00023b4a0) Stream added, broadcasting: 5\nI0507 00:18:57.815906 1000 log.go:172] (0xc000591d90) Reply frame received for 5\nI0507 00:18:57.881326 1000 log.go:172] (0xc000591d90) Data frame received for 5\nI0507 00:18:57.881474 1000 log.go:172] (0xc00023b4a0) (5) Data frame handling\nI0507 00:18:57.881500 1000 log.go:172] (0xc00023b4a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0507 00:18:57.883489 1000 log.go:172] (0xc000591d90) Data frame received for 5\nI0507 00:18:57.883527 1000 log.go:172] (0xc00023b4a0) (5) Data frame handling\nI0507 00:18:57.883552 1000 log.go:172] (0xc00023b4a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0507 00:18:57.884161 1000 log.go:172] (0xc000591d90) Data frame received for 5\nI0507 00:18:57.884205 1000 log.go:172] (0xc00023b4a0) (5) Data frame handling\nI0507 00:18:57.884242 1000 log.go:172] (0xc000591d90) Data frame received for 
3\nI0507 00:18:57.884258 1000 log.go:172] (0xc000690a00) (3) Data frame handling\nI0507 00:18:57.886211 1000 log.go:172] (0xc000591d90) Data frame received for 1\nI0507 00:18:57.886244 1000 log.go:172] (0xc00023a820) (1) Data frame handling\nI0507 00:18:57.886269 1000 log.go:172] (0xc00023a820) (1) Data frame sent\nI0507 00:18:57.886299 1000 log.go:172] (0xc000591d90) (0xc00023a820) Stream removed, broadcasting: 1\nI0507 00:18:57.886342 1000 log.go:172] (0xc000591d90) Go away received\nI0507 00:18:57.886665 1000 log.go:172] (0xc000591d90) (0xc00023a820) Stream removed, broadcasting: 1\nI0507 00:18:57.886686 1000 log.go:172] (0xc000591d90) (0xc000690a00) Stream removed, broadcasting: 3\nI0507 00:18:57.886701 1000 log.go:172] (0xc000591d90) (0xc00023b4a0) Stream removed, broadcasting: 5\n" May 7 00:18:57.893: INFO: stdout: "" May 7 00:18:57.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7112 execpodzh6qj -- /bin/sh -x -c nc -zv -t -w 2 10.102.198.161 80' May 7 00:18:58.116: INFO: stderr: "I0507 00:18:58.038816 1022 log.go:172] (0xc00003b080) (0xc00013b900) Create stream\nI0507 00:18:58.038870 1022 log.go:172] (0xc00003b080) (0xc00013b900) Stream added, broadcasting: 1\nI0507 00:18:58.040850 1022 log.go:172] (0xc00003b080) Reply frame received for 1\nI0507 00:18:58.040895 1022 log.go:172] (0xc00003b080) (0xc0005dc0a0) Create stream\nI0507 00:18:58.040905 1022 log.go:172] (0xc00003b080) (0xc0005dc0a0) Stream added, broadcasting: 3\nI0507 00:18:58.041912 1022 log.go:172] (0xc00003b080) Reply frame received for 3\nI0507 00:18:58.041962 1022 log.go:172] (0xc00003b080) (0xc0005dd040) Create stream\nI0507 00:18:58.041982 1022 log.go:172] (0xc00003b080) (0xc0005dd040) Stream added, broadcasting: 5\nI0507 00:18:58.042622 1022 log.go:172] (0xc00003b080) Reply frame received for 5\nI0507 00:18:58.110407 1022 log.go:172] (0xc00003b080) Data frame received for 3\nI0507 00:18:58.110450 1022 
log.go:172] (0xc0005dc0a0) (3) Data frame handling\nI0507 00:18:58.110481 1022 log.go:172] (0xc00003b080) Data frame received for 5\nI0507 00:18:58.110507 1022 log.go:172] (0xc0005dd040) (5) Data frame handling\nI0507 00:18:58.110532 1022 log.go:172] (0xc0005dd040) (5) Data frame sent\nI0507 00:18:58.110545 1022 log.go:172] (0xc00003b080) Data frame received for 5\nI0507 00:18:58.110554 1022 log.go:172] (0xc0005dd040) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.198.161 80\nConnection to 10.102.198.161 80 port [tcp/http] succeeded!\nI0507 00:18:58.112058 1022 log.go:172] (0xc00003b080) Data frame received for 1\nI0507 00:18:58.112082 1022 log.go:172] (0xc00013b900) (1) Data frame handling\nI0507 00:18:58.112092 1022 log.go:172] (0xc00013b900) (1) Data frame sent\nI0507 00:18:58.112103 1022 log.go:172] (0xc00003b080) (0xc00013b900) Stream removed, broadcasting: 1\nI0507 00:18:58.112346 1022 log.go:172] (0xc00003b080) Go away received\nI0507 00:18:58.112443 1022 log.go:172] (0xc00003b080) (0xc00013b900) Stream removed, broadcasting: 1\nI0507 00:18:58.112473 1022 log.go:172] (0xc00003b080) (0xc0005dc0a0) Stream removed, broadcasting: 3\nI0507 00:18:58.112489 1022 log.go:172] (0xc00003b080) (0xc0005dd040) Stream removed, broadcasting: 5\n" May 7 00:18:58.117: INFO: stdout: "" May 7 00:18:58.117: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7112 execpodzh6qj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31372' May 7 00:18:58.347: INFO: stderr: "I0507 00:18:58.258235 1043 log.go:172] (0xc0009ef6b0) (0xc00076c8c0) Create stream\nI0507 00:18:58.258294 1043 log.go:172] (0xc0009ef6b0) (0xc00076c8c0) Stream added, broadcasting: 1\nI0507 00:18:58.262653 1043 log.go:172] (0xc0009ef6b0) Reply frame received for 1\nI0507 00:18:58.262702 1043 log.go:172] (0xc0009ef6b0) (0xc00077c1e0) Create stream\nI0507 00:18:58.262714 1043 log.go:172] (0xc0009ef6b0) (0xc00077c1e0) Stream added, broadcasting: 
3\nI0507 00:18:58.265713 1043 log.go:172] (0xc0009ef6b0) Reply frame received for 3\nI0507 00:18:58.265741 1043 log.go:172] (0xc0009ef6b0) (0xc00022dd60) Create stream\nI0507 00:18:58.265750 1043 log.go:172] (0xc0009ef6b0) (0xc00022dd60) Stream added, broadcasting: 5\nI0507 00:18:58.267595 1043 log.go:172] (0xc0009ef6b0) Reply frame received for 5\nI0507 00:18:58.340930 1043 log.go:172] (0xc0009ef6b0) Data frame received for 5\nI0507 00:18:58.340960 1043 log.go:172] (0xc00022dd60) (5) Data frame handling\nI0507 00:18:58.340980 1043 log.go:172] (0xc00022dd60) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31372\nConnection to 172.17.0.13 31372 port [tcp/31372] succeeded!\nI0507 00:18:58.341481 1043 log.go:172] (0xc0009ef6b0) Data frame received for 5\nI0507 00:18:58.341510 1043 log.go:172] (0xc00022dd60) (5) Data frame handling\nI0507 00:18:58.341552 1043 log.go:172] (0xc0009ef6b0) Data frame received for 3\nI0507 00:18:58.341603 1043 log.go:172] (0xc00077c1e0) (3) Data frame handling\nI0507 00:18:58.342921 1043 log.go:172] (0xc0009ef6b0) Data frame received for 1\nI0507 00:18:58.342941 1043 log.go:172] (0xc00076c8c0) (1) Data frame handling\nI0507 00:18:58.342950 1043 log.go:172] (0xc00076c8c0) (1) Data frame sent\nI0507 00:18:58.342960 1043 log.go:172] (0xc0009ef6b0) (0xc00076c8c0) Stream removed, broadcasting: 1\nI0507 00:18:58.342972 1043 log.go:172] (0xc0009ef6b0) Go away received\nI0507 00:18:58.343490 1043 log.go:172] (0xc0009ef6b0) (0xc00076c8c0) Stream removed, broadcasting: 1\nI0507 00:18:58.343510 1043 log.go:172] (0xc0009ef6b0) (0xc00077c1e0) Stream removed, broadcasting: 3\nI0507 00:18:58.343519 1043 log.go:172] (0xc0009ef6b0) (0xc00022dd60) Stream removed, broadcasting: 5\n" May 7 00:18:58.348: INFO: stdout: "" May 7 00:18:58.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7112 execpodzh6qj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31372' May 7 00:18:58.570: 
INFO: stderr: "I0507 00:18:58.480067 1062 log.go:172] (0xc000c260b0) (0xc00054a280) Create stream\nI0507 00:18:58.480140 1062 log.go:172] (0xc000c260b0) (0xc00054a280) Stream added, broadcasting: 1\nI0507 00:18:58.483256 1062 log.go:172] (0xc000c260b0) Reply frame received for 1\nI0507 00:18:58.483324 1062 log.go:172] (0xc000c260b0) (0xc0004fcdc0) Create stream\nI0507 00:18:58.483353 1062 log.go:172] (0xc000c260b0) (0xc0004fcdc0) Stream added, broadcasting: 3\nI0507 00:18:58.484434 1062 log.go:172] (0xc000c260b0) Reply frame received for 3\nI0507 00:18:58.484461 1062 log.go:172] (0xc000c260b0) (0xc000309360) Create stream\nI0507 00:18:58.484469 1062 log.go:172] (0xc000c260b0) (0xc000309360) Stream added, broadcasting: 5\nI0507 00:18:58.485613 1062 log.go:172] (0xc000c260b0) Reply frame received for 5\nI0507 00:18:58.563767 1062 log.go:172] (0xc000c260b0) Data frame received for 5\nI0507 00:18:58.563799 1062 log.go:172] (0xc000309360) (5) Data frame handling\nI0507 00:18:58.563807 1062 log.go:172] (0xc000309360) (5) Data frame sent\nI0507 00:18:58.563813 1062 log.go:172] (0xc000c260b0) Data frame received for 5\nI0507 00:18:58.563818 1062 log.go:172] (0xc000309360) (5) Data frame handling\nI0507 00:18:58.563826 1062 log.go:172] (0xc000c260b0) Data frame received for 3\n+ nc -zv -t -w 2 172.17.0.12 31372\nConnection to 172.17.0.12 31372 port [tcp/31372] succeeded!\nI0507 00:18:58.563833 1062 log.go:172] (0xc0004fcdc0) (3) Data frame handling\nI0507 00:18:58.565923 1062 log.go:172] (0xc000c260b0) Data frame received for 1\nI0507 00:18:58.565958 1062 log.go:172] (0xc00054a280) (1) Data frame handling\nI0507 00:18:58.565992 1062 log.go:172] (0xc00054a280) (1) Data frame sent\nI0507 00:18:58.566016 1062 log.go:172] (0xc000c260b0) (0xc00054a280) Stream removed, broadcasting: 1\nI0507 00:18:58.566035 1062 log.go:172] (0xc000c260b0) Go away received\nI0507 00:18:58.566416 1062 log.go:172] (0xc000c260b0) (0xc00054a280) Stream removed, broadcasting: 1\nI0507 00:18:58.566446 
1062 log.go:172] (0xc000c260b0) (0xc0004fcdc0) Stream removed, broadcasting: 3\nI0507 00:18:58.566465 1062 log.go:172] (0xc000c260b0) (0xc000309360) Stream removed, broadcasting: 5\n" May 7 00:18:58.570: INFO: stdout: "" May 7 00:18:58.570: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:18:58.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7112" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:13.142 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":72,"skipped":1099,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:18:58.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container 
[sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 7 00:20:59.267: INFO: Successfully updated pod "var-expansion-9244883b-648d-4ae5-b93b-595a6b84f032" STEP: waiting for pod running STEP: deleting the pod gracefully May 7 00:21:03.340: INFO: Deleting pod "var-expansion-9244883b-648d-4ae5-b93b-595a6b84f032" in namespace "var-expansion-7999" May 7 00:21:03.345: INFO: Wait up to 5m0s for pod "var-expansion-9244883b-648d-4ae5-b93b-595a6b84f032" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:21:45.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7999" for this suite. • [SLOW TEST:166.745 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":73,"skipped":1102,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a 
kubernetes client May 7 00:21:45.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 7 00:21:46.039: INFO: Waiting up to 5m0s for pod "downwardapi-volume-726a013e-1ed5-4de9-814e-8b836f8df1a0" in namespace "downward-api-1160" to be "Succeeded or Failed" May 7 00:21:46.064: INFO: Pod "downwardapi-volume-726a013e-1ed5-4de9-814e-8b836f8df1a0": Phase="Pending", Reason="", readiness=false. Elapsed: 25.099413ms May 7 00:21:48.070: INFO: Pod "downwardapi-volume-726a013e-1ed5-4de9-814e-8b836f8df1a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030936574s May 7 00:21:50.148: INFO: Pod "downwardapi-volume-726a013e-1ed5-4de9-814e-8b836f8df1a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10875507s May 7 00:21:52.278: INFO: Pod "downwardapi-volume-726a013e-1ed5-4de9-814e-8b836f8df1a0": Phase="Running", Reason="", readiness=true. Elapsed: 6.238467481s May 7 00:21:54.282: INFO: Pod "downwardapi-volume-726a013e-1ed5-4de9-814e-8b836f8df1a0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.243168462s STEP: Saw pod success May 7 00:21:54.282: INFO: Pod "downwardapi-volume-726a013e-1ed5-4de9-814e-8b836f8df1a0" satisfied condition "Succeeded or Failed" May 7 00:21:54.286: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-726a013e-1ed5-4de9-814e-8b836f8df1a0 container client-container: STEP: delete the pod May 7 00:21:54.336: INFO: Waiting for pod downwardapi-volume-726a013e-1ed5-4de9-814e-8b836f8df1a0 to disappear May 7 00:21:54.375: INFO: Pod downwardapi-volume-726a013e-1ed5-4de9-814e-8b836f8df1a0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:21:54.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1160" for this suite. • [SLOW TEST:8.978 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":74,"skipped":1103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 
00:21:54.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:21:54.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7367" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":75,"skipped":1126,"failed":0} S ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:21:54.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 7 00:21:54.881: INFO: Waiting up to 5m0s for pod 
"downward-api-e1749fee-c31e-46b4-9870-ffe5551fa554" in namespace "downward-api-9884" to be "Succeeded or Failed" May 7 00:21:55.029: INFO: Pod "downward-api-e1749fee-c31e-46b4-9870-ffe5551fa554": Phase="Pending", Reason="", readiness=false. Elapsed: 147.628378ms May 7 00:21:57.077: INFO: Pod "downward-api-e1749fee-c31e-46b4-9870-ffe5551fa554": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195336082s May 7 00:21:59.081: INFO: Pod "downward-api-e1749fee-c31e-46b4-9870-ffe5551fa554": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199788252s May 7 00:22:01.190: INFO: Pod "downward-api-e1749fee-c31e-46b4-9870-ffe5551fa554": Phase="Running", Reason="", readiness=true. Elapsed: 6.308943328s May 7 00:22:03.214: INFO: Pod "downward-api-e1749fee-c31e-46b4-9870-ffe5551fa554": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.332412515s STEP: Saw pod success May 7 00:22:03.214: INFO: Pod "downward-api-e1749fee-c31e-46b4-9870-ffe5551fa554" satisfied condition "Succeeded or Failed" May 7 00:22:03.217: INFO: Trying to get logs from node latest-worker pod downward-api-e1749fee-c31e-46b4-9870-ffe5551fa554 container dapi-container: STEP: delete the pod May 7 00:22:03.313: INFO: Waiting for pod downward-api-e1749fee-c31e-46b4-9870-ffe5551fa554 to disappear May 7 00:22:03.399: INFO: Pod downward-api-e1749fee-c31e-46b4-9870-ffe5551fa554 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:22:03.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9884" for this suite. 
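The `Elapsed:` fields above are Go duration strings such as `147.628378ms` or `8.332412515s`. A small helper for converting the single-unit values seen in this log to seconds (a simplification: real Go durations can combine units, e.g. `3m0s`, which this sketch does not handle):

```python
import re

# Unit suffixes and their scale in seconds, matching Go's duration formatting.
_UNITS = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def parse_go_duration(text):
    """Convert a single-unit Go duration string (e.g. '147.628378ms') to seconds."""
    m = re.fullmatch(r"([0-9.]+)(ns|us|ms|s|m|h)", text)
    if not m:
        raise ValueError(f"unrecognized duration: {text!r}")
    return float(m.group(1)) * _UNITS[m.group(2)]

print(parse_go_duration("147.628378ms"))
```

With elapsed values in seconds, the poll intervals in the log (roughly two seconds between phase checks) become easy to verify by differencing consecutive entries.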
• [SLOW TEST:8.865 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":76,"skipped":1127,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:22:03.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-8b3764ee-12f1-488c-8452-be148e55816c
STEP: Creating a pod to test consume secrets
May 7 00:22:03.594: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-83b0bfb4-fcfa-40ef-af94-21f777f3161b" in namespace "projected-8021" to be "Succeeded or Failed"
May 7 00:22:03.610: INFO: Pod "pod-projected-secrets-83b0bfb4-fcfa-40ef-af94-21f777f3161b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.108305ms
May 7 00:22:05.622: INFO: Pod "pod-projected-secrets-83b0bfb4-fcfa-40ef-af94-21f777f3161b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02810784s
May 7 00:22:07.626: INFO: Pod "pod-projected-secrets-83b0bfb4-fcfa-40ef-af94-21f777f3161b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032138073s
STEP: Saw pod success
May 7 00:22:07.626: INFO: Pod "pod-projected-secrets-83b0bfb4-fcfa-40ef-af94-21f777f3161b" satisfied condition "Succeeded or Failed"
May 7 00:22:07.629: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-83b0bfb4-fcfa-40ef-af94-21f777f3161b container projected-secret-volume-test:
STEP: delete the pod
May 7 00:22:07.826: INFO: Waiting for pod pod-projected-secrets-83b0bfb4-fcfa-40ef-af94-21f777f3161b to disappear
May 7 00:22:07.861: INFO: Pod pod-projected-secrets-83b0bfb4-fcfa-40ef-af94-21f777f3161b no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:22:07.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8021" for this suite.
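Between per-spec summaries the runner prints one progress glyph per spec: `•` for a pass and one `S` for each skipped spec (an `F` would mark a failure; none occur in this run, so treating `F` as the failure glyph is an assumption here). A short helper for tallying such a glyph run:

```python
def tally_glyphs(progress):
    """Count Ginkgo progress glyphs: '•' = passed, 'S' = skipped, 'F' = failed (assumed)."""
    return {
        "passed": progress.count("•"),
        "skipped": progress.count("S"),
        "failed": progress.count("F"),
    }

print(tally_glyphs("•SSSSSSSS•SS"))
```

The glyph counts should agree with the deltas between consecutive JSON progress records; for example, the `skipped` counter jumping from 994 to 1082 corresponds to a run of 88 `S` characters.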
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":77,"skipped":1134,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:22:07.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 7 00:22:08.016: INFO: Waiting up to 5m0s for pod "pod-e84b5c5e-f4b7-458b-b1c2-d8dc12d06586" in namespace "emptydir-9164" to be "Succeeded or Failed"
May 7 00:22:08.020: INFO: Pod "pod-e84b5c5e-f4b7-458b-b1c2-d8dc12d06586": Phase="Pending", Reason="", readiness=false. Elapsed: 3.255637ms
May 7 00:22:10.256: INFO: Pod "pod-e84b5c5e-f4b7-458b-b1c2-d8dc12d06586": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239952004s
May 7 00:22:12.260: INFO: Pod "pod-e84b5c5e-f4b7-458b-b1c2-d8dc12d06586": Phase="Running", Reason="", readiness=true. Elapsed: 4.24396559s
May 7 00:22:14.266: INFO: Pod "pod-e84b5c5e-f4b7-458b-b1c2-d8dc12d06586": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.249356217s
STEP: Saw pod success
May 7 00:22:14.266: INFO: Pod "pod-e84b5c5e-f4b7-458b-b1c2-d8dc12d06586" satisfied condition "Succeeded or Failed"
May 7 00:22:14.268: INFO: Trying to get logs from node latest-worker pod pod-e84b5c5e-f4b7-458b-b1c2-d8dc12d06586 container test-container:
STEP: delete the pod
May 7 00:22:14.335: INFO: Waiting for pod pod-e84b5c5e-f4b7-458b-b1c2-d8dc12d06586 to disappear
May 7 00:22:14.507: INFO: Pod pod-e84b5c5e-f4b7-458b-b1c2-d8dc12d06586 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:22:14.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9164" for this suite.
• [SLOW TEST:6.688 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":78,"skipped":1139,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:22:14.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 7 00:22:14.742: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:22:23.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7946" for this suite. • [SLOW TEST:8.833 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":79,"skipped":1165,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:22:23.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with 
absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:24:23.498: INFO: Deleting pod "var-expansion-da6ddad4-4b33-49ad-874c-96f80a88e5bb" in namespace "var-expansion-4855" May 7 00:24:23.503: INFO: Wait up to 5m0s for pod "var-expansion-da6ddad4-4b33-49ad-874c-96f80a88e5bb" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:24:25.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4855" for this suite. • [SLOW TEST:122.199 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":80,"skipped":1177,"failed":0} [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:24:25.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 7 00:24:25.685: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7545' May 7 00:24:26.034: INFO: stderr: "" May 7 00:24:26.034: INFO: stdout: "pod/pause created\n" May 7 00:24:26.034: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 7 00:24:26.034: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7545" to be "running and ready" May 7 00:24:26.049: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.635419ms May 7 00:24:28.054: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020164666s May 7 00:24:30.059: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.025031528s May 7 00:24:30.059: INFO: Pod "pause" satisfied condition "running and ready" May 7 00:24:30.059: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 7 00:24:30.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7545' May 7 00:24:30.160: INFO: stderr: "" May 7 00:24:30.160: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 7 00:24:30.160: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7545' May 7 00:24:30.255: INFO: stderr: "" May 7 00:24:30.255: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 7 00:24:30.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7545' May 7 00:24:30.359: INFO: stderr: "" May 7 00:24:30.359: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 7 00:24:30.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7545' May 7 00:24:30.454: INFO: stderr: "" May 7 00:24:30.454: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 7 00:24:30.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete 
--grace-period=0 --force -f - --namespace=kubectl-7545' May 7 00:24:30.563: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 7 00:24:30.563: INFO: stdout: "pod \"pause\" force deleted\n" May 7 00:24:30.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7545' May 7 00:24:30.672: INFO: stderr: "No resources found in kubectl-7545 namespace.\n" May 7 00:24:30.672: INFO: stdout: "" May 7 00:24:30.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7545 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 7 00:24:30.876: INFO: stderr: "" May 7 00:24:30.876: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:24:30.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7545" for this suite. 
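The label test above exercises both `kubectl label` argument forms: `testing-label=testing-label-value` to set a label and the trailing-dash form `testing-label-` to remove it. A toy Python model of that `key=value` / `key-` handling (simplified; real kubectl also validates label names and gates overwrites behind `--overwrite`):

```python
def apply_label_args(labels, args):
    """Apply kubectl-style label arguments to a label map:
    'key=value' sets a label, 'key-' removes one."""
    labels = dict(labels)
    for arg in args:
        if arg.endswith("-") and "=" not in arg:
            labels.pop(arg[:-1], None)       # removal form: key-
        elif "=" in arg:
            key, _, value = arg.partition("=")
            labels[key] = value              # set form: key=value
        else:
            raise ValueError(f"unrecognized label arg: {arg!r}")
    return labels

# Mirror the test: add the label, verify, then remove it with the dash form.
labeled = apply_label_args({"run": "pause"}, ["testing-label=testing-label-value"])
unlabeled = apply_label_args(labeled, ["testing-label-"])
```

This is why the second `get pod pause -L testing-label` in the log shows an empty TESTING-LABEL column: the dash form deletes the key rather than setting it to an empty string.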
• [SLOW TEST:5.283 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":81,"skipped":1177,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:24:30.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 7 00:24:31.029: INFO: >>> kubeConfig: /root/.kube/config May 7 00:24:34.005: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:24:44.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4805" for this suite. 
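The CRD publishing test above verifies that custom resources from two different API groups both appear in the aggregated OpenAPI document. Published definition keys are derived from the group, version, and kind, so CRDs in distinct groups can never collide; a hedged sketch of that naming scheme (reversed DNS group segments, the convention used for published CRD schema keys — group and kind names here are made up):

```python
def openapi_definition_name(group, version, kind):
    """Build an OpenAPI definition key for a CRD, e.g.
    group 'mygroup.example.com', 'v1', 'Foo' -> 'com.example.mygroup.v1.Foo'."""
    reversed_group = ".".join(reversed(group.split(".")))
    return f"{reversed_group}.{version}.{kind}"

# Two CRDs in different groups yield distinct definition keys,
# so both schemas coexist in one OpenAPI document.
a = openapi_definition_name("mygroup.example.com", "v1", "Foo")
b = openapi_definition_name("othergroup.example.com", "v1", "Bar")
```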
• [SLOW TEST:13.852 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":82,"skipped":1182,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:24:44.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 7 00:24:53.348: INFO: Successfully updated pod "adopt-release-fn4nh" STEP: Checking that the Job readopts the Pod May 7 00:24:53.348: INFO: Waiting up to 15m0s for pod "adopt-release-fn4nh" in namespace "job-4316" to be "adopted" May 7 00:24:53.360: INFO: Pod "adopt-release-fn4nh": Phase="Running", Reason="", readiness=true. Elapsed: 12.452554ms May 7 00:24:55.365: INFO: Pod "adopt-release-fn4nh": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.016937069s May 7 00:24:55.365: INFO: Pod "adopt-release-fn4nh" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 7 00:24:55.875: INFO: Successfully updated pod "adopt-release-fn4nh" STEP: Checking that the Job releases the Pod May 7 00:24:55.875: INFO: Waiting up to 15m0s for pod "adopt-release-fn4nh" in namespace "job-4316" to be "released" May 7 00:24:55.914: INFO: Pod "adopt-release-fn4nh": Phase="Running", Reason="", readiness=true. Elapsed: 39.512082ms May 7 00:24:58.036: INFO: Pod "adopt-release-fn4nh": Phase="Running", Reason="", readiness=true. Elapsed: 2.160997484s May 7 00:24:58.036: INFO: Pod "adopt-release-fn4nh" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:24:58.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4316" for this suite. • [SLOW TEST:13.310 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":83,"skipped":1195,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 
00:24:58.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:24:58.356: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:24:59.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6319" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":84,"skipped":1208,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:24:59.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 7 00:24:59.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3073' May 7 00:25:00.079: INFO: stderr: "" May 7 00:25:00.079: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 7 00:25:05.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3073 -o json' May 7 00:25:05.229: INFO: stderr: "" May 7 00:25:05.229: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-07T00:25:00Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-07T00:25:00Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n 
\"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.160\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-07T00:25:02Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3073\",\n \"resourceVersion\": \"2167740\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3073/pods/e2e-test-httpd-pod\",\n \"uid\": \"42626c78-3228-42dc-ace8-6d429bbe3a3e\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-h8nn4\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": 
\"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-h8nn4\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-h8nn4\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-07T00:25:00Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-07T00:25:02Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-07T00:25:02Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-07T00:25:00Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://78135717bf8078fb197fbeac16e17b0ad097e8133a7781436c91d75e4f53e3e4\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-07T00:25:02Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.160\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.160\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-07T00:25:00Z\"\n }\n}\n" STEP: replace the image in the pod May 7 00:25:05.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3073' May 7 00:25:05.539: INFO: stderr: "" May 7 00:25:05.539: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image 
docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 7 00:25:05.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3073' May 7 00:25:15.253: INFO: stderr: "" May 7 00:25:15.253: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:25:15.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3073" for this suite. • [SLOW TEST:15.361 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":85,"skipped":1232,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:25:15.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward 
API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 7 00:25:15.467: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aba9c97d-91b7-4b9b-918b-71954a3aeb17" in namespace "downward-api-6560" to be "Succeeded or Failed" May 7 00:25:15.471: INFO: Pod "downwardapi-volume-aba9c97d-91b7-4b9b-918b-71954a3aeb17": Phase="Pending", Reason="", readiness=false. Elapsed: 3.636076ms May 7 00:25:17.647: INFO: Pod "downwardapi-volume-aba9c97d-91b7-4b9b-918b-71954a3aeb17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179558561s May 7 00:25:19.652: INFO: Pod "downwardapi-volume-aba9c97d-91b7-4b9b-918b-71954a3aeb17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.184487399s STEP: Saw pod success May 7 00:25:19.652: INFO: Pod "downwardapi-volume-aba9c97d-91b7-4b9b-918b-71954a3aeb17" satisfied condition "Succeeded or Failed" May 7 00:25:19.655: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-aba9c97d-91b7-4b9b-918b-71954a3aeb17 container client-container: STEP: delete the pod May 7 00:25:19.702: INFO: Waiting for pod downwardapi-volume-aba9c97d-91b7-4b9b-918b-71954a3aeb17 to disappear May 7 00:25:19.713: INFO: Pod downwardapi-volume-aba9c97d-91b7-4b9b-918b-71954a3aeb17 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:25:19.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6560" for this suite. 
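The downward API test above projects the container's cpu request into a volume file. The value written is the resource quantity divided by the `resourceFieldRef` divisor and rounded up to an integer (the default divisor is "1", i.e. whole cores); a hedged sketch of that conversion, assuming the ceiling behavior, with millicpu as the input unit:

```python
import math

def render_cpu(millicpu, divisor_millicpu=1000):
    """Render a cpu quantity for a downward API volume file:
    the request divided by the divisor, rounded up (divisor '1' = 1000 millicpu)."""
    return str(math.ceil(millicpu / divisor_millicpu))

# A 250m cpu request: divisor '1m' yields '250'; the default divisor '1' yields '1'.
fine = render_cpu(250, divisor_millicpu=1)
coarse = render_cpu(250)
```

This is why fractional cpu requests projected with the default divisor read back as small whole numbers rather than millicpu values.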
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":86,"skipped":1245,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:25:19.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 7 00:25:19.897: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5826 /api/v1/namespaces/watch-5826/configmaps/e2e-watch-test-label-changed 89c36390-3302-4f21-bba2-01c6b5e09325 2167847 0 2020-05-07 00:25:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-07 00:25:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 7 00:25:19.897: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5826 /api/v1/namespaces/watch-5826/configmaps/e2e-watch-test-label-changed 89c36390-3302-4f21-bba2-01c6b5e09325 
2167848 0 2020-05-07 00:25:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-07 00:25:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 7 00:25:19.897: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5826 /api/v1/namespaces/watch-5826/configmaps/e2e-watch-test-label-changed 89c36390-3302-4f21-bba2-01c6b5e09325 2167849 0 2020-05-07 00:25:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-07 00:25:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 7 00:25:29.982: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5826 /api/v1/namespaces/watch-5826/configmaps/e2e-watch-test-label-changed 89c36390-3302-4f21-bba2-01c6b5e09325 2167900 0 2020-05-07 00:25:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-07 00:25:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 7 00:25:29.982: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5826 
/api/v1/namespaces/watch-5826/configmaps/e2e-watch-test-label-changed 89c36390-3302-4f21-bba2-01c6b5e09325 2167901 0 2020-05-07 00:25:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-07 00:25:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 7 00:25:29.983: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5826 /api/v1/namespaces/watch-5826/configmaps/e2e-watch-test-label-changed 89c36390-3302-4f21-bba2-01c6b5e09325 2167902 0 2020-05-07 00:25:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-07 00:25:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:25:29.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5826" for this suite. 
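In the watch test above, changing the configmap's label off the selector surfaces as a DELETED event on the filtered watch, and restoring the label surfaces as ADDED again, even though the underlying object was only modified. A toy in-memory simulation of that selector-scoped event translation (real watches stream from the API server; this only models the membership logic):

```python
def selector_events(updates, selector):
    """Translate object label updates into events as seen by a label-selector
    watch: entering the selector -> ADDED, staying in -> MODIFIED,
    leaving -> DELETED; updates outside the selector are invisible."""
    events, was_in = [], False
    for labels in updates:
        now_in = all(labels.get(k) == v for k, v in selector.items())
        if now_in and not was_in:
            events.append("ADDED")
        elif now_in and was_in:
            events.append("MODIFIED")
        elif was_in and not now_in:
            events.append("DELETED")
        was_in = now_in
    return events

updates = [
    {"watch-this-configmap": "label-changed-and-restored"},  # created
    {"watch-this-configmap": "label-changed-and-restored"},  # modified once
    {"watch-this-configmap": "wrong-value"},                 # label changed away
    {"watch-this-configmap": "label-changed-and-restored"},  # label restored
]
seen = selector_events(updates, {"watch-this-configmap": "label-changed-and-restored"})
```

The sequence matches the log: ADDED and MODIFIED while the label matches, DELETED when it stops matching, and a fresh ADDED once it is restored.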
• [SLOW TEST:10.281 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":87,"skipped":1265,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:25:30.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 7 00:25:30.078: INFO: Waiting up to 5m0s for pod "downward-api-f7aefa98-11ab-4c5e-9279-fe8d8e1bd1e2" in namespace "downward-api-7786" to be "Succeeded or Failed" May 7 00:25:30.081: INFO: Pod "downward-api-f7aefa98-11ab-4c5e-9279-fe8d8e1bd1e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.847262ms May 7 00:25:32.174: INFO: Pod "downward-api-f7aefa98-11ab-4c5e-9279-fe8d8e1bd1e2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.095785323s May 7 00:25:34.241: INFO: Pod "downward-api-f7aefa98-11ab-4c5e-9279-fe8d8e1bd1e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.161976866s STEP: Saw pod success May 7 00:25:34.241: INFO: Pod "downward-api-f7aefa98-11ab-4c5e-9279-fe8d8e1bd1e2" satisfied condition "Succeeded or Failed" May 7 00:25:34.244: INFO: Trying to get logs from node latest-worker pod downward-api-f7aefa98-11ab-4c5e-9279-fe8d8e1bd1e2 container dapi-container: STEP: delete the pod May 7 00:25:34.356: INFO: Waiting for pod downward-api-f7aefa98-11ab-4c5e-9279-fe8d8e1bd1e2 to disappear May 7 00:25:34.398: INFO: Pod downward-api-f7aefa98-11ab-4c5e-9279-fe8d8e1bd1e2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:25:34.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7786" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":88,"skipped":1321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:25:34.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-e706919e-5d82-4b49-832f-6717af05b473 STEP: Creating a pod to test consume configMaps May 7 00:25:34.736: INFO: Waiting up to 5m0s for pod "pod-configmaps-df477daa-087b-45cf-b667-b82e9e851158" in namespace "configmap-4525" to be "Succeeded or Failed" May 7 00:25:34.742: INFO: Pod "pod-configmaps-df477daa-087b-45cf-b667-b82e9e851158": Phase="Pending", Reason="", readiness=false. Elapsed: 5.267199ms May 7 00:25:36.746: INFO: Pod "pod-configmaps-df477daa-087b-45cf-b667-b82e9e851158": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009528658s May 7 00:25:38.750: INFO: Pod "pod-configmaps-df477daa-087b-45cf-b667-b82e9e851158": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013732719s STEP: Saw pod success May 7 00:25:38.750: INFO: Pod "pod-configmaps-df477daa-087b-45cf-b667-b82e9e851158" satisfied condition "Succeeded or Failed" May 7 00:25:38.753: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-df477daa-087b-45cf-b667-b82e9e851158 container configmap-volume-test: STEP: delete the pod May 7 00:25:38.816: INFO: Waiting for pod pod-configmaps-df477daa-087b-45cf-b667-b82e9e851158 to disappear May 7 00:25:38.822: INFO: Pod pod-configmaps-df477daa-087b-45cf-b667-b82e9e851158 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:25:38.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4525" for this suite. 
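The "mappings and Item mode" test above projects a ConfigMap key into a volume under a remapped path with an explicit per-item file mode. A hedged sketch of that shape (the key, path, and mode values are assumptions for illustration, not read from the log):

```yaml
# Hypothetical manifest approximating the test above: one ConfigMap key is
# mapped to a nested path inside the volume, with a per-item mode override.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/configmap-volume/path/to/data-2 && cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-2            # ConfigMap key to project
        path: path/to/data-2   # remapped path inside the volume
        mode: 0400             # per-item file mode; this is what "Item mode set" exercises
```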
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":89,"skipped":1374,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:25:38.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4496 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4496 STEP: creating replication controller externalsvc in namespace services-4496 I0507 00:25:39.128764 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4496, replica count: 2 I0507 00:25:42.179153 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:25:45.179373 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort 
service to type=ExternalName May 7 00:25:45.261: INFO: Creating new exec pod May 7 00:25:49.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4496 execpodw5g5z -- /bin/sh -x -c nslookup nodeport-service' May 7 00:25:49.564: INFO: stderr: "I0507 00:25:49.435888 1325 log.go:172] (0xc000a8d290) (0xc000815cc0) Create stream\nI0507 00:25:49.435946 1325 log.go:172] (0xc000a8d290) (0xc000815cc0) Stream added, broadcasting: 1\nI0507 00:25:49.440711 1325 log.go:172] (0xc000a8d290) Reply frame received for 1\nI0507 00:25:49.440752 1325 log.go:172] (0xc000a8d290) (0xc00059a320) Create stream\nI0507 00:25:49.440765 1325 log.go:172] (0xc000a8d290) (0xc00059a320) Stream added, broadcasting: 3\nI0507 00:25:49.442182 1325 log.go:172] (0xc000a8d290) Reply frame received for 3\nI0507 00:25:49.442240 1325 log.go:172] (0xc000a8d290) (0xc000560e60) Create stream\nI0507 00:25:49.442257 1325 log.go:172] (0xc000a8d290) (0xc000560e60) Stream added, broadcasting: 5\nI0507 00:25:49.443074 1325 log.go:172] (0xc000a8d290) Reply frame received for 5\nI0507 00:25:49.543551 1325 log.go:172] (0xc000a8d290) Data frame received for 5\nI0507 00:25:49.543578 1325 log.go:172] (0xc000560e60) (5) Data frame handling\nI0507 00:25:49.543594 1325 log.go:172] (0xc000560e60) (5) Data frame sent\n+ nslookup nodeport-service\nI0507 00:25:49.553563 1325 log.go:172] (0xc000a8d290) Data frame received for 3\nI0507 00:25:49.553594 1325 log.go:172] (0xc00059a320) (3) Data frame handling\nI0507 00:25:49.553611 1325 log.go:172] (0xc00059a320) (3) Data frame sent\nI0507 00:25:49.554810 1325 log.go:172] (0xc000a8d290) Data frame received for 3\nI0507 00:25:49.554831 1325 log.go:172] (0xc00059a320) (3) Data frame handling\nI0507 00:25:49.554847 1325 log.go:172] (0xc00059a320) (3) Data frame sent\nI0507 00:25:49.555502 1325 log.go:172] (0xc000a8d290) Data frame received for 3\nI0507 00:25:49.555543 1325 log.go:172] (0xc00059a320) (3) 
Data frame handling\nI0507 00:25:49.555567 1325 log.go:172] (0xc000a8d290) Data frame received for 5\nI0507 00:25:49.555581 1325 log.go:172] (0xc000560e60) (5) Data frame handling\nI0507 00:25:49.557659 1325 log.go:172] (0xc000a8d290) Data frame received for 1\nI0507 00:25:49.557685 1325 log.go:172] (0xc000815cc0) (1) Data frame handling\nI0507 00:25:49.557706 1325 log.go:172] (0xc000815cc0) (1) Data frame sent\nI0507 00:25:49.557831 1325 log.go:172] (0xc000a8d290) (0xc000815cc0) Stream removed, broadcasting: 1\nI0507 00:25:49.558260 1325 log.go:172] (0xc000a8d290) Go away received\nI0507 00:25:49.558342 1325 log.go:172] (0xc000a8d290) (0xc000815cc0) Stream removed, broadcasting: 1\nI0507 00:25:49.558363 1325 log.go:172] (0xc000a8d290) (0xc00059a320) Stream removed, broadcasting: 3\nI0507 00:25:49.558379 1325 log.go:172] (0xc000a8d290) (0xc000560e60) Stream removed, broadcasting: 5\n" May 7 00:25:49.564: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4496.svc.cluster.local\tcanonical name = externalsvc.services-4496.svc.cluster.local.\nName:\texternalsvc.services-4496.svc.cluster.local\nAddress: 10.110.135.209\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4496, will wait for the garbage collector to delete the pods May 7 00:25:49.623: INFO: Deleting ReplicationController externalsvc took: 5.778571ms May 7 00:25:49.924: INFO: Terminating ReplicationController externalsvc pods took: 300.341657ms May 7 00:26:04.953: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:26:04.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4496" for this suite. 
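The service mutation tested above turns a NodePort service into an ExternalName service pointing at another in-cluster service's FQDN, after which DNS resolves the original name as a CNAME (exactly what the nslookup stdout shows). A sketch of the post-mutation spec, assuming the names from the log; note that converting to ExternalName also requires clearing the clusterIP and nodePort fields:

```yaml
# Hypothetical end state of the NodePort -> ExternalName transition above:
# lookups of nodeport-service now return a CNAME to externalsvc's cluster FQDN.
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-4496
spec:
  type: ExternalName
  externalName: externalsvc.services-4496.svc.cluster.local
```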
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:26.170 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":90,"skipped":1407,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:26:05.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 7 00:26:14.628: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 7 00:26:14.663: INFO: Pod pod-with-prestop-http-hook still exists May 7 00:26:16.663: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 7 00:26:16.668: INFO: Pod pod-with-prestop-http-hook still exists May 7 00:26:18.663: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 7 00:26:18.669: INFO: Pod pod-with-prestop-http-hook still exists May 7 00:26:20.663: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 7 00:26:20.667: INFO: Pod pod-with-prestop-http-hook still exists May 7 00:26:22.663: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 7 00:26:22.668: INFO: Pod pod-with-prestop-http-hook still exists May 7 00:26:24.663: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 7 00:26:24.668: INFO: Pod pod-with-prestop-http-hook still exists May 7 00:26:26.663: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 7 00:26:26.668: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:26:26.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2628" for this suite. 
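The preStop test above registers an HTTP-GET lifecycle hook that fires against the handler pod (created in BeforeEach) while the pod is being deleted; the repeated "still exists" polls cover the graceful-termination window in which the hook runs. A minimal sketch of the hook wiring (image, path, port, and the handler IP are assumptions for illustration):

```yaml
# Hypothetical pod approximating the test above: on deletion, the kubelet
# issues the preStop HTTP GET before sending SIGTERM to the container.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.2
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop
          port: 8080
          host: 10.0.0.1   # hypothetical: the IP of the hook-handler pod from BeforeEach
```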
• [SLOW TEST:21.683 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":91,"skipped":1440,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:26:26.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-5136c7a9-2d22-4492-84e9-428b0adb4803 STEP: Creating a pod to test consume configMaps May 7 00:26:26.803: INFO: Waiting up to 5m0s for pod "pod-configmaps-36453e3e-7426-46c7-810a-a0c1c0d220e0" in namespace "configmap-5200" to be "Succeeded or Failed" May 7 00:26:26.840: INFO: Pod "pod-configmaps-36453e3e-7426-46c7-810a-a0c1c0d220e0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.203678ms May 7 00:26:31.111: INFO: Pod "pod-configmaps-36453e3e-7426-46c7-810a-a0c1c0d220e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308071288s May 7 00:26:33.180: INFO: Pod "pod-configmaps-36453e3e-7426-46c7-810a-a0c1c0d220e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.377007288s May 7 00:26:35.184: INFO: Pod "pod-configmaps-36453e3e-7426-46c7-810a-a0c1c0d220e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.380503155s STEP: Saw pod success May 7 00:26:35.184: INFO: Pod "pod-configmaps-36453e3e-7426-46c7-810a-a0c1c0d220e0" satisfied condition "Succeeded or Failed" May 7 00:26:35.186: INFO: Trying to get logs from node latest-worker pod pod-configmaps-36453e3e-7426-46c7-810a-a0c1c0d220e0 container configmap-volume-test: STEP: delete the pod May 7 00:26:35.204: INFO: Waiting for pod pod-configmaps-36453e3e-7426-46c7-810a-a0c1c0d220e0 to disappear May 7 00:26:35.208: INFO: Pod pod-configmaps-36453e3e-7426-46c7-810a-a0c1c0d220e0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:26:35.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5200" for this suite. 
• [SLOW TEST:8.531 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":92,"skipped":1445,"failed":0} [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:26:35.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 7 00:26:35.439: INFO: Waiting up to 5m0s for pod "pod-0da2ab91-5cfb-4fe6-a3a5-3d97b4a8cd52" in namespace "emptydir-5602" to be "Succeeded or Failed" May 7 00:26:35.618: INFO: Pod "pod-0da2ab91-5cfb-4fe6-a3a5-3d97b4a8cd52": Phase="Pending", Reason="", readiness=false. Elapsed: 178.850655ms May 7 00:26:37.621: INFO: Pod "pod-0da2ab91-5cfb-4fe6-a3a5-3d97b4a8cd52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182536038s May 7 00:26:39.626: INFO: Pod "pod-0da2ab91-5cfb-4fe6-a3a5-3d97b4a8cd52": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.186672794s STEP: Saw pod success May 7 00:26:39.626: INFO: Pod "pod-0da2ab91-5cfb-4fe6-a3a5-3d97b4a8cd52" satisfied condition "Succeeded or Failed" May 7 00:26:39.628: INFO: Trying to get logs from node latest-worker pod pod-0da2ab91-5cfb-4fe6-a3a5-3d97b4a8cd52 container test-container: STEP: delete the pod May 7 00:26:39.679: INFO: Waiting for pod pod-0da2ab91-5cfb-4fe6-a3a5-3d97b4a8cd52 to disappear May 7 00:26:39.749: INFO: Pod pod-0da2ab91-5cfb-4fe6-a3a5-3d97b4a8cd52 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:26:39.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5602" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":93,"skipped":1445,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:26:39.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-6a90996d-f48e-414f-9f1c-bbb063d69e2a STEP: Creating the pod STEP: Updating configmap configmap-test-upd-6a90996d-f48e-414f-9f1c-bbb063d69e2a STEP: waiting to observe update in volume [AfterEach] 
[sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:26:45.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4900" for this suite. • [SLOW TEST:6.179 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":94,"skipped":1466,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:26:45.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-9c1b6c20-c926-49e1-8aca-5f4a74f53c34 STEP: Creating a pod to test consume secrets May 7 00:26:46.049: INFO: Waiting up to 5m0s for pod "pod-secrets-62de0f83-a8cf-4699-95e9-f5d355599226" in namespace "secrets-9716" to be "Succeeded or Failed" May 7 00:26:46.067: INFO: Pod "pod-secrets-62de0f83-a8cf-4699-95e9-f5d355599226": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.069903ms May 7 00:26:48.072: INFO: Pod "pod-secrets-62de0f83-a8cf-4699-95e9-f5d355599226": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022900627s May 7 00:26:50.076: INFO: Pod "pod-secrets-62de0f83-a8cf-4699-95e9-f5d355599226": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027454311s STEP: Saw pod success May 7 00:26:50.076: INFO: Pod "pod-secrets-62de0f83-a8cf-4699-95e9-f5d355599226" satisfied condition "Succeeded or Failed" May 7 00:26:50.079: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-62de0f83-a8cf-4699-95e9-f5d355599226 container secret-volume-test: STEP: delete the pod May 7 00:26:50.140: INFO: Waiting for pod pod-secrets-62de0f83-a8cf-4699-95e9-f5d355599226 to disappear May 7 00:26:50.149: INFO: Pod pod-secrets-62de0f83-a8cf-4699-95e9-f5d355599226 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:26:50.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9716" for this suite. 
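The test above mounts the same Secret into one pod through two separate volumes, verifying both mounts see the data. A hedged sketch of that layout (mount paths, key, and image are illustrative assumptions):

```yaml
# Hypothetical manifest for "consumable in multiple volumes in a pod":
# two volumes back onto the same Secret and are mounted at different paths.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test
  - name: secret-volume-2
    secret:
      secretName: secret-test
```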
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":95,"skipped":1486,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:26:50.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8291 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8291 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8291 May 7 00:26:50.305: INFO: Found 0 stateful pods, waiting for 1 May 7 00:27:00.310: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 7 00:27:00.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-8291 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 7 00:27:03.232: INFO: stderr: "I0507 00:27:03.098819 1345 log.go:172] (0xc000d2c0b0) (0xc000836dc0) Create stream\nI0507 00:27:03.098858 1345 log.go:172] (0xc000d2c0b0) (0xc000836dc0) Stream added, broadcasting: 1\nI0507 00:27:03.101437 1345 log.go:172] (0xc000d2c0b0) Reply frame received for 1\nI0507 00:27:03.101504 1345 log.go:172] (0xc000d2c0b0) (0xc00083d040) Create stream\nI0507 00:27:03.101529 1345 log.go:172] (0xc000d2c0b0) (0xc00083d040) Stream added, broadcasting: 3\nI0507 00:27:03.102511 1345 log.go:172] (0xc000d2c0b0) Reply frame received for 3\nI0507 00:27:03.102541 1345 log.go:172] (0xc000d2c0b0) (0xc000837d60) Create stream\nI0507 00:27:03.102554 1345 log.go:172] (0xc000d2c0b0) (0xc000837d60) Stream added, broadcasting: 5\nI0507 00:27:03.104232 1345 log.go:172] (0xc000d2c0b0) Reply frame received for 5\nI0507 00:27:03.157884 1345 log.go:172] (0xc000d2c0b0) Data frame received for 5\nI0507 00:27:03.157919 1345 log.go:172] (0xc000837d60) (5) Data frame handling\nI0507 00:27:03.157945 1345 log.go:172] (0xc000837d60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0507 00:27:03.224455 1345 log.go:172] (0xc000d2c0b0) Data frame received for 3\nI0507 00:27:03.224494 1345 log.go:172] (0xc00083d040) (3) Data frame handling\nI0507 00:27:03.224520 1345 log.go:172] (0xc00083d040) (3) Data frame sent\nI0507 00:27:03.224532 1345 log.go:172] (0xc000d2c0b0) Data frame received for 3\nI0507 00:27:03.224540 1345 log.go:172] (0xc00083d040) (3) Data frame handling\nI0507 00:27:03.224710 1345 log.go:172] (0xc000d2c0b0) Data frame received for 5\nI0507 00:27:03.224750 1345 log.go:172] (0xc000837d60) (5) Data frame handling\nI0507 00:27:03.226733 1345 log.go:172] (0xc000d2c0b0) Data frame received for 1\nI0507 00:27:03.226765 1345 log.go:172] (0xc000836dc0) (1) Data frame handling\nI0507 
00:27:03.226792 1345 log.go:172] (0xc000836dc0) (1) Data frame sent\nI0507 00:27:03.226811 1345 log.go:172] (0xc000d2c0b0) (0xc000836dc0) Stream removed, broadcasting: 1\nI0507 00:27:03.226832 1345 log.go:172] (0xc000d2c0b0) Go away received\nI0507 00:27:03.227154 1345 log.go:172] (0xc000d2c0b0) (0xc000836dc0) Stream removed, broadcasting: 1\nI0507 00:27:03.227170 1345 log.go:172] (0xc000d2c0b0) (0xc00083d040) Stream removed, broadcasting: 3\nI0507 00:27:03.227178 1345 log.go:172] (0xc000d2c0b0) (0xc000837d60) Stream removed, broadcasting: 5\n" May 7 00:27:03.232: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 7 00:27:03.232: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 7 00:27:03.236: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 7 00:27:13.240: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 7 00:27:13.240: INFO: Waiting for statefulset status.replicas updated to 0 May 7 00:27:13.264: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999733s May 7 00:27:14.270: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.98779508s May 7 00:27:15.275: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.981664449s May 7 00:27:16.280: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.976489113s May 7 00:27:17.285: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.971338185s May 7 00:27:18.290: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.966014503s May 7 00:27:19.295: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.961191154s May 7 00:27:20.300: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.956535523s May 7 00:27:21.304: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.951795133s May 7 
00:27:22.309: INFO: Verifying statefulset ss doesn't scale past 1 for another 947.867414ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8291 May 7 00:27:23.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8291 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 7 00:27:23.533: INFO: stderr: "I0507 00:27:23.457447 1376 log.go:172] (0xc000a540b0) (0xc000252fa0) Create stream\nI0507 00:27:23.457527 1376 log.go:172] (0xc000a540b0) (0xc000252fa0) Stream added, broadcasting: 1\nI0507 00:27:23.460905 1376 log.go:172] (0xc000a540b0) Reply frame received for 1\nI0507 00:27:23.460958 1376 log.go:172] (0xc000a540b0) (0xc0001534a0) Create stream\nI0507 00:27:23.460982 1376 log.go:172] (0xc000a540b0) (0xc0001534a0) Stream added, broadcasting: 3\nI0507 00:27:23.462630 1376 log.go:172] (0xc000a540b0) Reply frame received for 3\nI0507 00:27:23.462701 1376 log.go:172] (0xc000a540b0) (0xc0003f8280) Create stream\nI0507 00:27:23.462724 1376 log.go:172] (0xc000a540b0) (0xc0003f8280) Stream added, broadcasting: 5\nI0507 00:27:23.463873 1376 log.go:172] (0xc000a540b0) Reply frame received for 5\nI0507 00:27:23.522385 1376 log.go:172] (0xc000a540b0) Data frame received for 3\nI0507 00:27:23.522422 1376 log.go:172] (0xc0001534a0) (3) Data frame handling\nI0507 00:27:23.522446 1376 log.go:172] (0xc0001534a0) (3) Data frame sent\nI0507 00:27:23.522459 1376 log.go:172] (0xc000a540b0) Data frame received for 3\nI0507 00:27:23.522470 1376 log.go:172] (0xc0001534a0) (3) Data frame handling\nI0507 00:27:23.522494 1376 log.go:172] (0xc000a540b0) Data frame received for 5\nI0507 00:27:23.522534 1376 log.go:172] (0xc0003f8280) (5) Data frame handling\nI0507 00:27:23.522560 1376 log.go:172] (0xc0003f8280) (5) Data frame sent\nI0507 00:27:23.522575 1376 log.go:172] (0xc000a540b0) Data frame received for 
5\nI0507 00:27:23.522588 1376 log.go:172] (0xc0003f8280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0507 00:27:23.525793 1376 log.go:172] (0xc000a540b0) Data frame received for 1\nI0507 00:27:23.525839 1376 log.go:172] (0xc000252fa0) (1) Data frame handling\nI0507 00:27:23.525863 1376 log.go:172] (0xc000252fa0) (1) Data frame sent\nI0507 00:27:23.526446 1376 log.go:172] (0xc000a540b0) (0xc000252fa0) Stream removed, broadcasting: 1\nI0507 00:27:23.526990 1376 log.go:172] (0xc000a540b0) Go away received\nI0507 00:27:23.527091 1376 log.go:172] (0xc000a540b0) (0xc000252fa0) Stream removed, broadcasting: 1\nI0507 00:27:23.527122 1376 log.go:172] (0xc000a540b0) (0xc0001534a0) Stream removed, broadcasting: 3\nI0507 00:27:23.527144 1376 log.go:172] (0xc000a540b0) (0xc0003f8280) Stream removed, broadcasting: 5\n" May 7 00:27:23.533: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 7 00:27:23.533: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 7 00:27:23.536: INFO: Found 1 stateful pods, waiting for 3 May 7 00:27:33.553: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 7 00:27:33.553: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 7 00:27:33.553: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 7 00:27:33.565: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8291 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 7 00:27:33.802: INFO: stderr: "I0507 00:27:33.694439 1395 log.go:172] (0xc0006ac2c0) (0xc00021dd60) Create stream\nI0507 00:27:33.694488 
1395 log.go:172] (0xc0006ac2c0) (0xc00021dd60) Stream added, broadcasting: 1\nI0507 00:27:33.696539 1395 log.go:172] (0xc0006ac2c0) Reply frame received for 1\nI0507 00:27:33.696579 1395 log.go:172] (0xc0006ac2c0) (0xc0001390e0) Create stream\nI0507 00:27:33.696599 1395 log.go:172] (0xc0006ac2c0) (0xc0001390e0) Stream added, broadcasting: 3\nI0507 00:27:33.697789 1395 log.go:172] (0xc0006ac2c0) Reply frame received for 3\nI0507 00:27:33.697819 1395 log.go:172] (0xc0006ac2c0) (0xc0000be1e0) Create stream\nI0507 00:27:33.697830 1395 log.go:172] (0xc0006ac2c0) (0xc0000be1e0) Stream added, broadcasting: 5\nI0507 00:27:33.698815 1395 log.go:172] (0xc0006ac2c0) Reply frame received for 5\nI0507 00:27:33.795763 1395 log.go:172] (0xc0006ac2c0) Data frame received for 5\nI0507 00:27:33.795801 1395 log.go:172] (0xc0000be1e0) (5) Data frame handling\nI0507 00:27:33.795818 1395 log.go:172] (0xc0000be1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0507 00:27:33.795856 1395 log.go:172] (0xc0006ac2c0) Data frame received for 5\nI0507 00:27:33.795866 1395 log.go:172] (0xc0000be1e0) (5) Data frame handling\nI0507 00:27:33.795888 1395 log.go:172] (0xc0006ac2c0) Data frame received for 3\nI0507 00:27:33.795898 1395 log.go:172] (0xc0001390e0) (3) Data frame handling\nI0507 00:27:33.795909 1395 log.go:172] (0xc0001390e0) (3) Data frame sent\nI0507 00:27:33.795917 1395 log.go:172] (0xc0006ac2c0) Data frame received for 3\nI0507 00:27:33.795924 1395 log.go:172] (0xc0001390e0) (3) Data frame handling\nI0507 00:27:33.797062 1395 log.go:172] (0xc0006ac2c0) Data frame received for 1\nI0507 00:27:33.797082 1395 log.go:172] (0xc00021dd60) (1) Data frame handling\nI0507 00:27:33.797094 1395 log.go:172] (0xc00021dd60) (1) Data frame sent\nI0507 00:27:33.797106 1395 log.go:172] (0xc0006ac2c0) (0xc00021dd60) Stream removed, broadcasting: 1\nI0507 00:27:33.797249 1395 log.go:172] (0xc0006ac2c0) Go away received\nI0507 00:27:33.797577 1395 log.go:172] (0xc0006ac2c0) 
(0xc00021dd60) Stream removed, broadcasting: 1\nI0507 00:27:33.797598 1395 log.go:172] (0xc0006ac2c0) (0xc0001390e0) Stream removed, broadcasting: 3\nI0507 00:27:33.797610 1395 log.go:172] (0xc0006ac2c0) (0xc0000be1e0) Stream removed, broadcasting: 5\n" May 7 00:27:33.802: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 7 00:27:33.802: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 7 00:27:33.802: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8291 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 7 00:27:34.046: INFO: stderr: "I0507 00:27:33.926477 1417 log.go:172] (0xc000b52840) (0xc000479180) Create stream\nI0507 00:27:33.926537 1417 log.go:172] (0xc000b52840) (0xc000479180) Stream added, broadcasting: 1\nI0507 00:27:33.929992 1417 log.go:172] (0xc000b52840) Reply frame received for 1\nI0507 00:27:33.930042 1417 log.go:172] (0xc000b52840) (0xc00034cd20) Create stream\nI0507 00:27:33.930060 1417 log.go:172] (0xc000b52840) (0xc00034cd20) Stream added, broadcasting: 3\nI0507 00:27:33.931107 1417 log.go:172] (0xc000b52840) Reply frame received for 3\nI0507 00:27:33.931141 1417 log.go:172] (0xc000b52840) (0xc000306460) Create stream\nI0507 00:27:33.931170 1417 log.go:172] (0xc000b52840) (0xc000306460) Stream added, broadcasting: 5\nI0507 00:27:33.932163 1417 log.go:172] (0xc000b52840) Reply frame received for 5\nI0507 00:27:34.010163 1417 log.go:172] (0xc000b52840) Data frame received for 5\nI0507 00:27:34.010187 1417 log.go:172] (0xc000306460) (5) Data frame handling\nI0507 00:27:34.010200 1417 log.go:172] (0xc000306460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0507 00:27:34.036786 1417 log.go:172] (0xc000b52840) Data frame received for 3\nI0507 00:27:34.036828 1417 log.go:172] 
(0xc00034cd20) (3) Data frame handling\nI0507 00:27:34.036858 1417 log.go:172] (0xc00034cd20) (3) Data frame sent\nI0507 00:27:34.037329 1417 log.go:172] (0xc000b52840) Data frame received for 5\nI0507 00:27:34.037399 1417 log.go:172] (0xc000306460) (5) Data frame handling\nI0507 00:27:34.038525 1417 log.go:172] (0xc000b52840) Data frame received for 3\nI0507 00:27:34.038556 1417 log.go:172] (0xc00034cd20) (3) Data frame handling\nI0507 00:27:34.040381 1417 log.go:172] (0xc000b52840) Data frame received for 1\nI0507 00:27:34.040414 1417 log.go:172] (0xc000479180) (1) Data frame handling\nI0507 00:27:34.040451 1417 log.go:172] (0xc000479180) (1) Data frame sent\nI0507 00:27:34.040478 1417 log.go:172] (0xc000b52840) (0xc000479180) Stream removed, broadcasting: 1\nI0507 00:27:34.040653 1417 log.go:172] (0xc000b52840) Go away received\nI0507 00:27:34.040903 1417 log.go:172] (0xc000b52840) (0xc000479180) Stream removed, broadcasting: 1\nI0507 00:27:34.040927 1417 log.go:172] (0xc000b52840) (0xc00034cd20) Stream removed, broadcasting: 3\nI0507 00:27:34.040952 1417 log.go:172] (0xc000b52840) (0xc000306460) Stream removed, broadcasting: 5\n" May 7 00:27:34.047: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 7 00:27:34.047: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 7 00:27:34.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8291 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 7 00:27:34.297: INFO: stderr: "I0507 00:27:34.179143 1441 log.go:172] (0xc000acb600) (0xc0003585a0) Create stream\nI0507 00:27:34.179196 1441 log.go:172] (0xc000acb600) (0xc0003585a0) Stream added, broadcasting: 1\nI0507 00:27:34.181951 1441 log.go:172] (0xc000acb600) Reply frame received for 1\nI0507 00:27:34.182003 1441 log.go:172] 
(0xc000acb600) (0xc0005160a0) Create stream\nI0507 00:27:34.182027 1441 log.go:172] (0xc000acb600) (0xc0005160a0) Stream added, broadcasting: 3\nI0507 00:27:34.182875 1441 log.go:172] (0xc000acb600) Reply frame received for 3\nI0507 00:27:34.182915 1441 log.go:172] (0xc000acb600) (0xc000358aa0) Create stream\nI0507 00:27:34.182932 1441 log.go:172] (0xc000acb600) (0xc000358aa0) Stream added, broadcasting: 5\nI0507 00:27:34.183774 1441 log.go:172] (0xc000acb600) Reply frame received for 5\nI0507 00:27:34.261603 1441 log.go:172] (0xc000acb600) Data frame received for 5\nI0507 00:27:34.261637 1441 log.go:172] (0xc000358aa0) (5) Data frame handling\nI0507 00:27:34.261658 1441 log.go:172] (0xc000358aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0507 00:27:34.288201 1441 log.go:172] (0xc000acb600) Data frame received for 3\nI0507 00:27:34.288241 1441 log.go:172] (0xc0005160a0) (3) Data frame handling\nI0507 00:27:34.288280 1441 log.go:172] (0xc0005160a0) (3) Data frame sent\nI0507 00:27:34.288422 1441 log.go:172] (0xc000acb600) Data frame received for 5\nI0507 00:27:34.288456 1441 log.go:172] (0xc000358aa0) (5) Data frame handling\nI0507 00:27:34.288538 1441 log.go:172] (0xc000acb600) Data frame received for 3\nI0507 00:27:34.288575 1441 log.go:172] (0xc0005160a0) (3) Data frame handling\nI0507 00:27:34.290707 1441 log.go:172] (0xc000acb600) Data frame received for 1\nI0507 00:27:34.290744 1441 log.go:172] (0xc0003585a0) (1) Data frame handling\nI0507 00:27:34.290767 1441 log.go:172] (0xc0003585a0) (1) Data frame sent\nI0507 00:27:34.290787 1441 log.go:172] (0xc000acb600) (0xc0003585a0) Stream removed, broadcasting: 1\nI0507 00:27:34.290830 1441 log.go:172] (0xc000acb600) Go away received\nI0507 00:27:34.291316 1441 log.go:172] (0xc000acb600) (0xc0003585a0) Stream removed, broadcasting: 1\nI0507 00:27:34.291343 1441 log.go:172] (0xc000acb600) (0xc0005160a0) Stream removed, broadcasting: 3\nI0507 00:27:34.291363 1441 log.go:172] 
(0xc000acb600) (0xc000358aa0) Stream removed, broadcasting: 5\n" May 7 00:27:34.297: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 7 00:27:34.297: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 7 00:27:34.297: INFO: Waiting for statefulset status.replicas updated to 0 May 7 00:27:34.301: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 7 00:27:44.322: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 7 00:27:44.322: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 7 00:27:44.322: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 7 00:27:44.353: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999762s May 7 00:27:45.358: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975636211s May 7 00:27:46.364: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.970707989s May 7 00:27:47.369: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.964808823s May 7 00:27:48.374: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.959681556s May 7 00:27:49.379: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.954552238s May 7 00:27:50.384: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.949952043s May 7 00:27:51.388: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.944994275s May 7 00:27:52.394: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.940650769s May 7 00:27:53.399: INFO: Verifying statefulset ss doesn't scale past 3 for another 935.107587ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8291 May 7 00:27:54.405: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8291 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 7 00:27:54.645: INFO: stderr: "I0507 00:27:54.544152 1461 log.go:172] (0xc0009c62c0) (0xc0003a68c0) Create stream\nI0507 00:27:54.544202 1461 log.go:172] (0xc0009c62c0) (0xc0003a68c0) Stream added, broadcasting: 1\nI0507 00:27:54.546500 1461 log.go:172] (0xc0009c62c0) Reply frame received for 1\nI0507 00:27:54.546559 1461 log.go:172] (0xc0009c62c0) (0xc0006ecdc0) Create stream\nI0507 00:27:54.546576 1461 log.go:172] (0xc0009c62c0) (0xc0006ecdc0) Stream added, broadcasting: 3\nI0507 00:27:54.547672 1461 log.go:172] (0xc0009c62c0) Reply frame received for 3\nI0507 00:27:54.547704 1461 log.go:172] (0xc0009c62c0) (0xc0003a6fa0) Create stream\nI0507 00:27:54.547716 1461 log.go:172] (0xc0009c62c0) (0xc0003a6fa0) Stream added, broadcasting: 5\nI0507 00:27:54.548770 1461 log.go:172] (0xc0009c62c0) Reply frame received for 5\nI0507 00:27:54.636948 1461 log.go:172] (0xc0009c62c0) Data frame received for 5\nI0507 00:27:54.637012 1461 log.go:172] (0xc0003a6fa0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0507 00:27:54.637056 1461 log.go:172] (0xc0009c62c0) Data frame received for 3\nI0507 00:27:54.637323 1461 log.go:172] (0xc0006ecdc0) (3) Data frame handling\nI0507 00:27:54.637358 1461 log.go:172] (0xc0006ecdc0) (3) Data frame sent\nI0507 00:27:54.637383 1461 log.go:172] (0xc0009c62c0) Data frame received for 3\nI0507 00:27:54.637408 1461 log.go:172] (0xc0006ecdc0) (3) Data frame handling\nI0507 00:27:54.637438 1461 log.go:172] (0xc0003a6fa0) (5) Data frame sent\nI0507 00:27:54.637460 1461 log.go:172] (0xc0009c62c0) Data frame received for 5\nI0507 00:27:54.637477 1461 log.go:172] (0xc0003a6fa0) (5) Data frame handling\nI0507 00:27:54.639186 1461 log.go:172] (0xc0009c62c0) Data frame received for 1\nI0507 00:27:54.639216 1461 log.go:172] (0xc0003a68c0) (1) 
Data frame handling\nI0507 00:27:54.639231 1461 log.go:172] (0xc0003a68c0) (1) Data frame sent\nI0507 00:27:54.639248 1461 log.go:172] (0xc0009c62c0) (0xc0003a68c0) Stream removed, broadcasting: 1\nI0507 00:27:54.639264 1461 log.go:172] (0xc0009c62c0) Go away received\nI0507 00:27:54.639779 1461 log.go:172] (0xc0009c62c0) (0xc0003a68c0) Stream removed, broadcasting: 1\nI0507 00:27:54.639806 1461 log.go:172] (0xc0009c62c0) (0xc0006ecdc0) Stream removed, broadcasting: 3\nI0507 00:27:54.639825 1461 log.go:172] (0xc0009c62c0) (0xc0003a6fa0) Stream removed, broadcasting: 5\n" May 7 00:27:54.646: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 7 00:27:54.646: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 7 00:27:54.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8291 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 7 00:27:54.924: INFO: stderr: "I0507 00:27:54.788325 1484 log.go:172] (0xc00069abb0) (0xc0006645a0) Create stream\nI0507 00:27:54.788387 1484 log.go:172] (0xc00069abb0) (0xc0006645a0) Stream added, broadcasting: 1\nI0507 00:27:54.790830 1484 log.go:172] (0xc00069abb0) Reply frame received for 1\nI0507 00:27:54.790874 1484 log.go:172] (0xc00069abb0) (0xc0005ae5a0) Create stream\nI0507 00:27:54.790895 1484 log.go:172] (0xc00069abb0) (0xc0005ae5a0) Stream added, broadcasting: 3\nI0507 00:27:54.791690 1484 log.go:172] (0xc00069abb0) Reply frame received for 3\nI0507 00:27:54.791725 1484 log.go:172] (0xc00069abb0) (0xc0005af9a0) Create stream\nI0507 00:27:54.791739 1484 log.go:172] (0xc00069abb0) (0xc0005af9a0) Stream added, broadcasting: 5\nI0507 00:27:54.792467 1484 log.go:172] (0xc00069abb0) Reply frame received for 5\nI0507 00:27:54.917628 1484 log.go:172] (0xc00069abb0) Data frame received for 5\nI0507 
00:27:54.917664 1484 log.go:172] (0xc0005af9a0) (5) Data frame handling\nI0507 00:27:54.917691 1484 log.go:172] (0xc0005af9a0) (5) Data frame sent\nI0507 00:27:54.917703 1484 log.go:172] (0xc00069abb0) Data frame received for 5\nI0507 00:27:54.917718 1484 log.go:172] (0xc0005af9a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0507 00:27:54.917820 1484 log.go:172] (0xc00069abb0) Data frame received for 3\nI0507 00:27:54.917839 1484 log.go:172] (0xc0005ae5a0) (3) Data frame handling\nI0507 00:27:54.917868 1484 log.go:172] (0xc0005ae5a0) (3) Data frame sent\nI0507 00:27:54.917978 1484 log.go:172] (0xc00069abb0) Data frame received for 3\nI0507 00:27:54.918001 1484 log.go:172] (0xc0005ae5a0) (3) Data frame handling\nI0507 00:27:54.919372 1484 log.go:172] (0xc00069abb0) Data frame received for 1\nI0507 00:27:54.919399 1484 log.go:172] (0xc0006645a0) (1) Data frame handling\nI0507 00:27:54.919419 1484 log.go:172] (0xc0006645a0) (1) Data frame sent\nI0507 00:27:54.919663 1484 log.go:172] (0xc00069abb0) (0xc0006645a0) Stream removed, broadcasting: 1\nI0507 00:27:54.919853 1484 log.go:172] (0xc00069abb0) Go away received\nI0507 00:27:54.920077 1484 log.go:172] (0xc00069abb0) (0xc0006645a0) Stream removed, broadcasting: 1\nI0507 00:27:54.920096 1484 log.go:172] (0xc00069abb0) (0xc0005ae5a0) Stream removed, broadcasting: 3\nI0507 00:27:54.920109 1484 log.go:172] (0xc00069abb0) (0xc0005af9a0) Stream removed, broadcasting: 5\n" May 7 00:27:54.924: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 7 00:27:54.924: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 7 00:27:54.924: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8291 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 7 00:27:55.162: INFO: 
stderr: "I0507 00:27:55.100847 1505 log.go:172] (0xc000a744d0) (0xc000a5a140) Create stream\nI0507 00:27:55.100892 1505 log.go:172] (0xc000a744d0) (0xc000a5a140) Stream added, broadcasting: 1\nI0507 00:27:55.102814 1505 log.go:172] (0xc000a744d0) Reply frame received for 1\nI0507 00:27:55.102858 1505 log.go:172] (0xc000a744d0) (0xc0007499a0) Create stream\nI0507 00:27:55.102879 1505 log.go:172] (0xc000a744d0) (0xc0007499a0) Stream added, broadcasting: 3\nI0507 00:27:55.103568 1505 log.go:172] (0xc000a744d0) Reply frame received for 3\nI0507 00:27:55.103598 1505 log.go:172] (0xc000a744d0) (0xc000a92000) Create stream\nI0507 00:27:55.103605 1505 log.go:172] (0xc000a744d0) (0xc000a92000) Stream added, broadcasting: 5\nI0507 00:27:55.104315 1505 log.go:172] (0xc000a744d0) Reply frame received for 5\nI0507 00:27:55.156057 1505 log.go:172] (0xc000a744d0) Data frame received for 5\nI0507 00:27:55.156106 1505 log.go:172] (0xc000a92000) (5) Data frame handling\nI0507 00:27:55.156124 1505 log.go:172] (0xc000a92000) (5) Data frame sent\nI0507 00:27:55.156138 1505 log.go:172] (0xc000a744d0) Data frame received for 5\nI0507 00:27:55.156147 1505 log.go:172] (0xc000a92000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0507 00:27:55.156182 1505 log.go:172] (0xc000a744d0) Data frame received for 3\nI0507 00:27:55.156214 1505 log.go:172] (0xc0007499a0) (3) Data frame handling\nI0507 00:27:55.156228 1505 log.go:172] (0xc0007499a0) (3) Data frame sent\nI0507 00:27:55.156237 1505 log.go:172] (0xc000a744d0) Data frame received for 3\nI0507 00:27:55.156248 1505 log.go:172] (0xc0007499a0) (3) Data frame handling\nI0507 00:27:55.157495 1505 log.go:172] (0xc000a744d0) Data frame received for 1\nI0507 00:27:55.157510 1505 log.go:172] (0xc000a5a140) (1) Data frame handling\nI0507 00:27:55.157525 1505 log.go:172] (0xc000a5a140) (1) Data frame sent\nI0507 00:27:55.157564 1505 log.go:172] (0xc000a744d0) (0xc000a5a140) Stream removed, broadcasting: 1\nI0507 
00:27:55.157638 1505 log.go:172] (0xc000a744d0) Go away received\nI0507 00:27:55.157829 1505 log.go:172] (0xc000a744d0) (0xc000a5a140) Stream removed, broadcasting: 1\nI0507 00:27:55.157850 1505 log.go:172] (0xc000a744d0) (0xc0007499a0) Stream removed, broadcasting: 3\nI0507 00:27:55.157866 1505 log.go:172] (0xc000a744d0) (0xc000a92000) Stream removed, broadcasting: 5\n" May 7 00:27:55.162: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 7 00:27:55.162: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 7 00:27:55.162: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 7 00:28:15.229: INFO: Deleting all statefulset in ns statefulset-8291 May 7 00:28:15.231: INFO: Scaling statefulset ss to 0 May 7 00:28:15.240: INFO: Waiting for statefulset status.replicas updated to 0 May 7 00:28:15.242: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:28:15.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8291" for this suite. 
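The spec above repeatedly logs "Verifying statefulset ss doesn't scale past N for another Xs": the suite does not just check the replica count once, it requires the bound to hold for an entire 10-second window, failing immediately on any violation. A minimal, self-contained Python sketch of that "must hold for the whole window" polling pattern (helper and parameter names here are hypothetical, not taken from the e2e framework):

```python
import time

def confirm_holds(check, duration_s=10.0, interval_s=1.0,
                  now=time.monotonic, sleep=time.sleep):
    """Poll check() until duration_s elapses.

    Returns True only if check() stayed true for the whole window;
    a single False observation fails immediately, mirroring the
    "doesn't scale past N for another Xs" loop in the log above.
    The clock and sleep functions are injectable for testing.
    """
    deadline = now() + duration_s
    while True:
        remaining = deadline - now()
        if remaining <= 0:
            return True   # condition held for the full duration
        if not check():
            return False  # violated before the window elapsed
        sleep(min(interval_s, remaining))
```

In the log, `check` corresponds to "current replica count <= 3"; the decreasing "another Xs" values are simply `remaining` printed on each iteration.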
• [SLOW TEST:85.105 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":96,"skipped":1489,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:28:15.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:28:15.328: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 
00:28:16.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2689" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":97,"skipped":1494,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:28:16.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 7 00:28:16.460: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7669' May 7 00:28:16.580: INFO: stderr: "" May 7 00:28:16.580: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 May 7 00:28:16.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7669' May 7 00:28:21.401: INFO: stderr: "" May 7 00:28:21.401: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:28:21.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7669" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":98,"skipped":1523,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:28:21.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:28:25.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "kubelet-test-5035" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":99,"skipped":1525,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:28:25.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:28:25.641: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 7 00:28:25.672: INFO: Pod name sample-pod: Found 0 pods out of 1 May 7 00:28:30.678: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 7 00:28:30.678: INFO: Creating deployment "test-rolling-update-deployment" May 7 00:28:30.683: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 7 00:28:30.711: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 7 00:28:32.720: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 7 00:28:32.723: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408110, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408110, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408110, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408110, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:28:34.727: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 7 00:28:34.738: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2781 /apis/apps/v1/namespaces/deployment-2781/deployments/test-rolling-update-deployment 956d6111-0437-4f6b-adb7-53d99ed37a14 2169074 1 2020-05-07 00:28:30 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-07 00:28:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-07 00:28:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d7df08 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-07 00:28:30 +0000 UTC,LastTransitionTime:2020-05-07 00:28:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-07 00:28:33 +0000 UTC,LastTransitionTime:2020-05-07 00:28:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 7 00:28:34.741: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-2781 /apis/apps/v1/namespaces/deployment-2781/replicasets/test-rolling-update-deployment-df7bb669b 1fc0f30d-16a2-4d97-b80b-dd42c9326384 2169063 1 2020-05-07 00:28:30 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 956d6111-0437-4f6b-adb7-53d99ed37a14 0xc002b8e460 0xc002b8e461}] [] [{kube-controller-manager Update apps/v1 2020-05-07 00:28:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"956d6111-0437-4f6b-adb7-53d99ed37a14\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b8e4d8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 7 00:28:34.741: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 7 00:28:34.741: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2781 /apis/apps/v1/namespaces/deployment-2781/replicasets/test-rolling-update-controller fcb59b92-bb2c-4ec7-993f-680a28e7831a 2169073 2 2020-05-07 00:28:25 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 956d6111-0437-4f6b-adb7-53d99ed37a14 0xc002b8e357 0xc002b8e358}] [] [{e2e.test Update apps/v1 2020-05-07 00:28:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-07 00:28:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"956d6111-0437-4f6b-adb7-53d99ed37a14\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002b8e3f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 7 00:28:34.744: INFO: Pod "test-rolling-update-deployment-df7bb669b-47vjl" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-47vjl test-rolling-update-deployment-df7bb669b- deployment-2781 /api/v1/namespaces/deployment-2781/pods/test-rolling-update-deployment-df7bb669b-47vjl d7e1453e-18c6-48ad-ae4d-ac1a6631aeb0 2169062 0 2020-05-07 00:28:30 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b 1fc0f30d-16a2-4d97-b80b-dd42c9326384 0xc002b8e9a0 0xc002b8e9a1}] [] [{kube-controller-manager Update v1 2020-05-07 00:28:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fc0f30d-16a2-4d97-b80b-dd42c9326384\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:28:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.169\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k5nc6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k5nc6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Reso
urces:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k5nc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]Pod
Condition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:28:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:28:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:28:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:28:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.169,StartTime:2020-05-07 00:28:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-07 00:28:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://38d989de20a62aa9e884d3175f58562107da5a3482d0b1818fd676ec83fc715c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.169,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:28:34.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2781" for this suite. 
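The Deployment dump above shows a RollingUpdate strategy with maxSurge and maxUnavailable both at 25%, and the in-flight status briefly reports Replicas:2 with UnavailableReplicas:1. A minimal sketch of how those percentages resolve to pod counts for spec.replicas=1 (surge percentages round up, unavailable percentages round down; the helper names are illustrative, not from the e2e framework):

```python
import math

def resolve_surge(replicas: int, max_surge_pct: int) -> int:
    # maxSurge percentages round UP, so any non-zero percentage
    # always permits at least one extra pod.
    return math.ceil(replicas * max_surge_pct / 100)

def resolve_unavailable(replicas: int, max_unavailable_pct: int) -> int:
    # maxUnavailable percentages round DOWN, which can resolve to 0
    # and force a surge-first rollout.
    return math.floor(replicas * max_unavailable_pct / 100)

# For the 1-replica test deployment with 25%/25%:
surge = resolve_surge(1, 25)              # 1: one extra pod allowed
unavailable = resolve_unavailable(1, 25)  # 0: no pod may be taken down first
```

With surge=1 and unavailable=0, the controller must bring up the new ReplicaSet's pod and keep the old one until the new pod is Ready, which matches the transient Replicas:2, ReadyReplicas:1, UnavailableReplicas:1 status logged above.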
• [SLOW TEST:9.178 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":100,"skipped":1530,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:28:34.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7460 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 7 00:28:34.890: INFO: Found 0 stateful pods, waiting for 3 May 7 00:28:44.894: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 7 00:28:44.894: INFO: Waiting 
for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 7 00:28:44.894: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 7 00:28:54.894: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 7 00:28:54.894: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 7 00:28:54.894: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 7 00:28:54.952: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 7 00:29:05.008: INFO: Updating stateful set ss2 May 7 00:29:05.128: INFO: Waiting for Pod statefulset-7460/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 7 00:29:16.147: INFO: Found 2 stateful pods, waiting for 3 May 7 00:29:26.151: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 7 00:29:26.151: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 7 00:29:26.151: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 7 00:29:26.176: INFO: Updating stateful set ss2 May 7 00:29:26.227: INFO: Waiting for Pod statefulset-7460/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 7 00:29:36.250: INFO: Updating stateful set ss2 May 7 00:29:36.323: INFO: Waiting for StatefulSet statefulset-7460/ss2 to complete update May 7 00:29:36.323: INFO: Waiting for Pod statefulset-7460/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] 
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 7 00:29:46.328: INFO: Deleting all statefulset in ns statefulset-7460 May 7 00:29:46.330: INFO: Scaling statefulset ss2 to 0 May 7 00:30:06.853: INFO: Waiting for statefulset status.replicas updated to 0 May 7 00:30:06.856: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:30:07.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7460" for this suite. • [SLOW TEST:92.266 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":101,"skipped":1532,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:30:07.019: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 7 00:30:07.150: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:30:15.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2204" for this suite. • [SLOW TEST:8.590 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":102,"skipped":1564,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: 
Creating a kubernetes client May 7 00:30:15.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-405edc72-1698-4f6c-a5e4-8c23ff4526a4 STEP: Creating a pod to test consume secrets May 7 00:30:15.726: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ebce949c-1096-4443-8533-de34c0abfff0" in namespace "projected-299" to be "Succeeded or Failed" May 7 00:30:15.730: INFO: Pod "pod-projected-secrets-ebce949c-1096-4443-8533-de34c0abfff0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231846ms May 7 00:30:17.734: INFO: Pod "pod-projected-secrets-ebce949c-1096-4443-8533-de34c0abfff0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008573271s May 7 00:30:19.738: INFO: Pod "pod-projected-secrets-ebce949c-1096-4443-8533-de34c0abfff0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012564406s STEP: Saw pod success May 7 00:30:19.738: INFO: Pod "pod-projected-secrets-ebce949c-1096-4443-8533-de34c0abfff0" satisfied condition "Succeeded or Failed" May 7 00:30:19.741: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-ebce949c-1096-4443-8533-de34c0abfff0 container projected-secret-volume-test: STEP: delete the pod May 7 00:30:19.774: INFO: Waiting for pod pod-projected-secrets-ebce949c-1096-4443-8533-de34c0abfff0 to disappear May 7 00:30:19.786: INFO: Pod pod-projected-secrets-ebce949c-1096-4443-8533-de34c0abfff0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:30:19.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-299" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":103,"skipped":1566,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:30:19.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short 
dns-test-service-3.dns-3624.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3624.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3624.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3624.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 00:30:28.210: INFO: DNS probes using dns-test-0ec4d39c-b362-4858-b0bd-8f0948c4241f succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3624.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3624.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3624.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3624.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 00:30:34.358: INFO: File wheezy_udp@dns-test-service-3.dns-3624.svc.cluster.local from pod dns-3624/dns-test-fe082d1e-ef46-4a90-8e17-60d3d7fb5a40 contains 'foo.example.com. ' instead of 'bar.example.com.' May 7 00:30:34.361: INFO: File jessie_udp@dns-test-service-3.dns-3624.svc.cluster.local from pod dns-3624/dns-test-fe082d1e-ef46-4a90-8e17-60d3d7fb5a40 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 7 00:30:34.361: INFO: Lookups using dns-3624/dns-test-fe082d1e-ef46-4a90-8e17-60d3d7fb5a40 failed for: [wheezy_udp@dns-test-service-3.dns-3624.svc.cluster.local jessie_udp@dns-test-service-3.dns-3624.svc.cluster.local] May 7 00:30:39.369: INFO: File wheezy_udp@dns-test-service-3.dns-3624.svc.cluster.local from pod dns-3624/dns-test-fe082d1e-ef46-4a90-8e17-60d3d7fb5a40 contains 'foo.example.com. ' instead of 'bar.example.com.' May 7 00:30:39.373: INFO: File jessie_udp@dns-test-service-3.dns-3624.svc.cluster.local from pod dns-3624/dns-test-fe082d1e-ef46-4a90-8e17-60d3d7fb5a40 contains 'foo.example.com. ' instead of 'bar.example.com.' May 7 00:30:39.373: INFO: Lookups using dns-3624/dns-test-fe082d1e-ef46-4a90-8e17-60d3d7fb5a40 failed for: [wheezy_udp@dns-test-service-3.dns-3624.svc.cluster.local jessie_udp@dns-test-service-3.dns-3624.svc.cluster.local] May 7 00:30:44.413: INFO: File wheezy_udp@dns-test-service-3.dns-3624.svc.cluster.local from pod dns-3624/dns-test-fe082d1e-ef46-4a90-8e17-60d3d7fb5a40 contains 'foo.example.com. ' instead of 'bar.example.com.' May 7 00:30:44.417: INFO: File jessie_udp@dns-test-service-3.dns-3624.svc.cluster.local from pod dns-3624/dns-test-fe082d1e-ef46-4a90-8e17-60d3d7fb5a40 contains 'foo.example.com. ' instead of 'bar.example.com.' May 7 00:30:44.417: INFO: Lookups using dns-3624/dns-test-fe082d1e-ef46-4a90-8e17-60d3d7fb5a40 failed for: [wheezy_udp@dns-test-service-3.dns-3624.svc.cluster.local jessie_udp@dns-test-service-3.dns-3624.svc.cluster.local] May 7 00:30:49.369: INFO: File wheezy_udp@dns-test-service-3.dns-3624.svc.cluster.local from pod dns-3624/dns-test-fe082d1e-ef46-4a90-8e17-60d3d7fb5a40 contains '' instead of 'bar.example.com.' May 7 00:30:49.375: INFO: File jessie_udp@dns-test-service-3.dns-3624.svc.cluster.local from pod dns-3624/dns-test-fe082d1e-ef46-4a90-8e17-60d3d7fb5a40 contains 'foo.example.com. ' instead of 'bar.example.com.' 
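The repeated "contains 'foo.example.com. ' instead of 'bar.example.com.'" lines are the test re-reading each probe's result file on a ~5s cadence until the updated CNAME propagates through cluster DNS. A generic poll-until-match loop in the same spirit (an illustrative sketch, not the framework's actual helper):

```python
import time

def poll_until(read_value, expected, timeout_s=30.0, interval_s=0.01):
    """Re-read a value until it (stripped) equals `expected` or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read_value().strip() == expected:
            return True
        time.sleep(interval_s)
    return False

# Simulate a lookup that flips from the old to the new CNAME target
# after two stale reads, as in the log above:
answers = iter(["foo.example.com. ", "foo.example.com. ", "bar.example.com."])
assert poll_until(lambda: next(answers), "bar.example.com.")
```

The trailing space stripped here corresponds to the trailing whitespace `dig +short` leaves in the probe files, visible in the log's quoted values.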
May 7 00:30:49.375: INFO: Lookups using dns-3624/dns-test-fe082d1e-ef46-4a90-8e17-60d3d7fb5a40 failed for: [wheezy_udp@dns-test-service-3.dns-3624.svc.cluster.local jessie_udp@dns-test-service-3.dns-3624.svc.cluster.local] May 7 00:30:54.366: INFO: File wheezy_udp@dns-test-service-3.dns-3624.svc.cluster.local from pod dns-3624/dns-test-fe082d1e-ef46-4a90-8e17-60d3d7fb5a40 contains 'foo.example.com. ' instead of 'bar.example.com.' May 7 00:30:54.369: INFO: File jessie_udp@dns-test-service-3.dns-3624.svc.cluster.local from pod dns-3624/dns-test-fe082d1e-ef46-4a90-8e17-60d3d7fb5a40 contains 'foo.example.com. ' instead of 'bar.example.com.' May 7 00:30:54.369: INFO: Lookups using dns-3624/dns-test-fe082d1e-ef46-4a90-8e17-60d3d7fb5a40 failed for: [wheezy_udp@dns-test-service-3.dns-3624.svc.cluster.local jessie_udp@dns-test-service-3.dns-3624.svc.cluster.local] May 7 00:30:59.368: INFO: DNS probes using dns-test-fe082d1e-ef46-4a90-8e17-60d3d7fb5a40 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3624.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3624.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3624.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3624.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 00:31:08.329: INFO: DNS probes using dns-test-fbaed4de-5c6b-4e96-a15a-cb9e94372fa2 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:31:08.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "dns-3624" for this suite. • [SLOW TEST:49.025 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":104,"skipped":1600,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:31:08.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 7 00:31:08.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2994' May 7 00:31:09.153: INFO: stderr: "" May 7 00:31:09.153: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
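The "waiting for all containers ... to come up" loop that follows drives kubectl with a Go template that prints `true` only when the named container reports a running state. An equivalent check over the pod's status JSON (a sketch only; the field paths follow the Pod API, but the function name is made up):

```python
def container_running(pod: dict, name: str) -> bool:
    # Mirrors the template: {{if (exists . "status" "containerStatuses")}}
    # ... {{if (and (eq .name "update-demo") (exists . "state" "running"))}}true
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            return True
    return False

pending = {"status": {}}  # no containerStatuses yet: kubectl prints ""
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {}}}]}}
```

An empty stdout from the template, as in the "is created but not running" lines below, means either the status has not been populated yet or the container is not in the running state, so the test sleeps and retries.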
May 7 00:31:09.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2994' May 7 00:31:09.292: INFO: stderr: "" May 7 00:31:09.292: INFO: stdout: "update-demo-nautilus-bwgwb update-demo-nautilus-v9fj6 " May 7 00:31:09.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bwgwb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2994' May 7 00:31:09.410: INFO: stderr: "" May 7 00:31:09.410: INFO: stdout: "" May 7 00:31:09.410: INFO: update-demo-nautilus-bwgwb is created but not running May 7 00:31:14.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2994' May 7 00:31:14.549: INFO: stderr: "" May 7 00:31:14.549: INFO: stdout: "update-demo-nautilus-bwgwb update-demo-nautilus-v9fj6 " May 7 00:31:14.549: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bwgwb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2994' May 7 00:31:15.938: INFO: stderr: "" May 7 00:31:15.938: INFO: stdout: "" May 7 00:31:15.938: INFO: update-demo-nautilus-bwgwb is created but not running May 7 00:31:20.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2994' May 7 00:31:21.034: INFO: stderr: "" May 7 00:31:21.034: INFO: stdout: "update-demo-nautilus-bwgwb update-demo-nautilus-v9fj6 " May 7 00:31:21.034: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bwgwb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2994' May 7 00:31:21.140: INFO: stderr: "" May 7 00:31:21.140: INFO: stdout: "true" May 7 00:31:21.140: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bwgwb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2994' May 7 00:31:21.239: INFO: stderr: "" May 7 00:31:21.239: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 00:31:21.239: INFO: validating pod update-demo-nautilus-bwgwb May 7 00:31:21.242: INFO: got data: { "image": "nautilus.jpg" } May 7 00:31:21.242: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 7 00:31:21.242: INFO: update-demo-nautilus-bwgwb is verified up and running May 7 00:31:21.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9fj6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2994' May 7 00:31:21.338: INFO: stderr: "" May 7 00:31:21.338: INFO: stdout: "true" May 7 00:31:21.338: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9fj6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2994' May 7 00:31:21.429: INFO: stderr: "" May 7 00:31:21.429: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 00:31:21.429: INFO: validating pod update-demo-nautilus-v9fj6 May 7 00:31:21.433: INFO: got data: { "image": "nautilus.jpg" } May 7 00:31:21.433: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 7 00:31:21.433: INFO: update-demo-nautilus-v9fj6 is verified up and running STEP: using delete to clean up resources May 7 00:31:21.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2994' May 7 00:31:21.538: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 7 00:31:21.538: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 7 00:31:21.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2994' May 7 00:31:21.646: INFO: stderr: "No resources found in kubectl-2994 namespace.\n" May 7 00:31:21.646: INFO: stdout: "" May 7 00:31:21.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2994 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 7 00:31:21.747: INFO: stderr: "" May 7 00:31:21.747: INFO: stdout: "update-demo-nautilus-bwgwb\nupdate-demo-nautilus-v9fj6\n" May 7 00:31:22.248: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2994' May 7 00:31:22.350: INFO: stderr: "No resources found in kubectl-2994 namespace.\n" May 7 00:31:22.350: INFO: stdout: "" May 7 00:31:22.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2994 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 7 00:31:22.869: INFO: stderr: "" May 7 00:31:22.869: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:31:22.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2994" for this suite. 
• [SLOW TEST:14.235 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":105,"skipped":1606,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:31:23.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-902baf8e-0076-4f8a-b078-32ac86d94b7c STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-902baf8e-0076-4f8a-b078-32ac86d94b7c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:31:30.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8690" 
for this suite. • [SLOW TEST:7.140 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":106,"skipped":1664,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:31:30.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 7 00:31:30.260: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 7 00:31:41.434: INFO: >>> kubeConfig: /root/.kube/config May 7 00:31:44.530: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:31:56.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6228" for this suite. • [SLOW TEST:26.226 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":107,"skipped":1693,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:31:56.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container May 7 00:32:02.547: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9062 PodName:pod-sharedvolume-73bf8fb0-0676-4576-a033-79f344719441

ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:32:02.547: INFO: >>> kubeConfig: /root/.kube/config I0507 00:32:02.567322 7 log.go:172] (0xc002ebf4a0) (0xc00080e3c0) Create stream I0507 00:32:02.567351 7 log.go:172] (0xc002ebf4a0) (0xc00080e3c0) Stream added, broadcasting: 1 I0507 00:32:02.568714 7 log.go:172] (0xc002ebf4a0) Reply frame received for 1 I0507 00:32:02.568771 7 log.go:172] (0xc002ebf4a0) (0xc0003ea8c0) Create stream I0507 00:32:02.568782 7 log.go:172] (0xc002ebf4a0) (0xc0003ea8c0) Stream added, broadcasting: 3 I0507 00:32:02.569710 7 log.go:172] (0xc002ebf4a0) Reply frame received for 3 I0507 00:32:02.569743 7 log.go:172] (0xc002ebf4a0) (0xc000dd2fa0) Create stream I0507 00:32:02.569764 7 log.go:172] (0xc002ebf4a0) (0xc000dd2fa0) Stream added, broadcasting: 5 I0507 00:32:02.570609 7 log.go:172] (0xc002ebf4a0) Reply frame received for 5 I0507 00:32:02.657012 7 log.go:172] (0xc002ebf4a0) Data frame received for 3 I0507 00:32:02.657048 7 log.go:172] (0xc0003ea8c0) (3) Data frame handling I0507 00:32:02.657084 7 log.go:172] (0xc0003ea8c0) (3) Data frame sent I0507 00:32:02.657313 7 log.go:172] (0xc002ebf4a0) Data frame received for 3 I0507 00:32:02.657397 7 log.go:172] (0xc0003ea8c0) (3) Data frame handling I0507 00:32:02.657625 7 log.go:172] (0xc002ebf4a0) Data frame received for 5 I0507 00:32:02.657659 7 log.go:172] (0xc000dd2fa0) (5) Data frame handling I0507 00:32:02.659006 7 log.go:172] (0xc002ebf4a0) Data frame received for 1 I0507 00:32:02.659040 7 log.go:172] (0xc00080e3c0) (1) Data frame handling I0507 00:32:02.659065 7 log.go:172] (0xc00080e3c0) (1) Data frame sent I0507 00:32:02.659096 7 log.go:172] (0xc002ebf4a0) (0xc00080e3c0) Stream removed, broadcasting: 1 I0507 00:32:02.659121 7 log.go:172] (0xc002ebf4a0) Go away received I0507 00:32:02.659337 7 log.go:172] (0xc002ebf4a0) (0xc00080e3c0) Stream removed, broadcasting: 1 I0507 00:32:02.659369 7 log.go:172] (0xc002ebf4a0) 
(0xc0003ea8c0) Stream removed, broadcasting: 3 I0507 00:32:02.659384 7 log.go:172] (0xc002ebf4a0) (0xc000dd2fa0) Stream removed, broadcasting: 5 May 7 00:32:02.659: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:32:02.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9062" for this suite. • [SLOW TEST:6.246 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":108,"skipped":1701,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:32:02.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:32:02.775: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 7 00:32:07.781: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is 
running May 7 00:32:07.781: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 7 00:32:09.785: INFO: Creating deployment "test-rollover-deployment" May 7 00:32:09.836: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 7 00:32:11.842: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 7 00:32:11.849: INFO: Ensure that both replica sets have 1 created replica May 7 00:32:11.854: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 7 00:32:11.861: INFO: Updating deployment test-rollover-deployment May 7 00:32:11.861: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 7 00:32:13.883: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 7 00:32:13.891: INFO: Make sure deployment "test-rollover-deployment" is complete May 7 00:32:13.898: INFO: all replica sets need to contain the pod-template-hash label May 7 00:32:13.898: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408332, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} 
May 7 00:32:16.034: INFO: all replica sets need to contain the pod-template-hash label May 7 00:32:16.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408335, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:32:17.907: INFO: all replica sets need to contain the pod-template-hash label May 7 00:32:17.907: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408335, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:32:19.907: INFO: all replica sets need to contain the pod-template-hash label May 7 00:32:19.907: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408335, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:32:21.906: INFO: all replica sets need to contain the pod-template-hash label May 7 00:32:21.906: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408335, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:32:23.905: INFO: all replica sets need to contain the pod-template-hash label May 7 00:32:23.905: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408335, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:32:26.018: INFO: May 7 00:32:26.018: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408345, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724408330, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:32:27.905: INFO: May 7 00:32:27.905: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 7 00:32:27.911: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5501 /apis/apps/v1/namespaces/deployment-5501/deployments/test-rollover-deployment 0e31459d-1e60-42e2-8d2e-9ec6b39f53a4 2170456 2 2020-05-07 00:32:09 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-07 00:32:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-07 00:32:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004c8e1c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-07 00:32:10 +0000 UTC,LastTransitionTime:2020-05-07 
00:32:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-07 00:32:26 +0000 UTC,LastTransitionTime:2020-05-07 00:32:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 7 00:32:27.914: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-5501 /apis/apps/v1/namespaces/deployment-5501/replicasets/test-rollover-deployment-7c4fd9c879 9d6c6b5e-c35b-45e1-bfed-9c8350231540 2170445 2 2020-05-07 00:32:11 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 0e31459d-1e60-42e2-8d2e-9ec6b39f53a4 0xc004c8ea17 0xc004c8ea18}] [] [{kube-controller-manager Update apps/v1 2020-05-07 00:32:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e31459d-1e60-42e2-8d2e-9ec6b39f53a4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004c8eb28 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 7 00:32:27.914: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 7 00:32:27.914: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5501 /apis/apps/v1/namespaces/deployment-5501/replicasets/test-rollover-controller f7b8bb4c-556e-4c57-961e-f014a2bf95b2 2170455 2 2020-05-07 00:32:02 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 0e31459d-1e60-42e2-8d2e-9ec6b39f53a4 0xc004c8e72f 0xc004c8e740}] [] [{e2e.test Update apps/v1 2020-05-07 00:32:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-07 00:32:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e31459d-1e60-42e2-8d2e-9ec6b39f53a4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004c8e838 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 7 00:32:27.914: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-5501 /apis/apps/v1/namespaces/deployment-5501/replicasets/test-rollover-deployment-5686c4cfd5 bd9fa936-09b0-49fb-80d3-2518571b6dd9 2170393 2 2020-05-07 00:32:09 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 0e31459d-1e60-42e2-8d2e-9ec6b39f53a4 0xc004c8e8e7 0xc004c8e8e8}] [] [{kube-controller-manager Update apps/v1 2020-05-07 00:32:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e31459d-1e60-42e2-8d2e-9ec6b39f53a4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004c8e978 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] 
map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 7 00:32:27.917: INFO: Pod "test-rollover-deployment-7c4fd9c879-r4c4n" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-r4c4n test-rollover-deployment-7c4fd9c879- deployment-5501 /api/v1/namespaces/deployment-5501/pods/test-rollover-deployment-7c4fd9c879-r4c4n 40c13ec4-86fb-4f80-ae40-a7700367db7c 2170413 0 2020-05-07 00:32:11 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 9d6c6b5e-c35b-45e1-bfed-9c8350231540 0xc004c8f3c7 0xc004c8f3c8}] [] [{kube-controller-manager Update v1 2020-05-07 00:32:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9d6c6b5e-c35b-45e1-bfed-9c8350231540\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:32:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.91\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dcv54,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dcv54,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dcv54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDe
vices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:32:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:32:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:32:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.91,StartTime:2020-05-07 
00:32:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-07 00:32:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://3b3e5483128a5d39557bc9050c174b27cedb50d123f9e38f927cb73d9609bd27,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:32:27.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5501" for this suite. 
• [SLOW TEST:25.256 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":109,"skipped":1704,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:32:27.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-2239 STEP: creating replication controller nodeport-test in namespace services-2239 I0507 00:32:28.261932 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-2239, replica count: 2 I0507 00:32:31.312339 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:32:34.312550 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0507 00:32:37.312802 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 7 00:32:37.312: INFO: Creating new exec pod May 7 00:32:42.338: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2239 execpodrdgzq -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 7 00:32:42.575: INFO: stderr: "I0507 00:32:42.477920 1870 log.go:172] (0xc00003a0b0) (0xc00050c280) Create stream\nI0507 00:32:42.477983 1870 log.go:172] (0xc00003a0b0) (0xc00050c280) Stream added, broadcasting: 1\nI0507 00:32:42.479312 1870 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0507 00:32:42.479344 1870 log.go:172] (0xc00003a0b0) (0xc000856f00) Create stream\nI0507 00:32:42.479356 1870 log.go:172] (0xc00003a0b0) (0xc000856f00) Stream added, broadcasting: 3\nI0507 00:32:42.480043 1870 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0507 00:32:42.480074 1870 log.go:172] (0xc00003a0b0) (0xc000a78000) Create stream\nI0507 00:32:42.480085 1870 log.go:172] (0xc00003a0b0) (0xc000a78000) Stream added, broadcasting: 5\nI0507 00:32:42.480698 1870 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0507 00:32:42.566791 1870 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0507 00:32:42.566815 1870 log.go:172] (0xc000a78000) (5) Data frame handling\nI0507 00:32:42.566828 1870 log.go:172] (0xc000a78000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0507 00:32:42.567487 1870 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0507 00:32:42.567507 1870 log.go:172] (0xc000a78000) (5) Data frame handling\nI0507 00:32:42.567518 1870 log.go:172] (0xc000a78000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0507 00:32:42.567876 1870 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0507 00:32:42.567908 1870 log.go:172] (0xc000a78000) (5) Data 
frame handling\nI0507 00:32:42.567930 1870 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0507 00:32:42.567949 1870 log.go:172] (0xc000856f00) (3) Data frame handling\nI0507 00:32:42.569873 1870 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0507 00:32:42.569902 1870 log.go:172] (0xc00050c280) (1) Data frame handling\nI0507 00:32:42.569933 1870 log.go:172] (0xc00050c280) (1) Data frame sent\nI0507 00:32:42.570040 1870 log.go:172] (0xc00003a0b0) (0xc00050c280) Stream removed, broadcasting: 1\nI0507 00:32:42.570085 1870 log.go:172] (0xc00003a0b0) Go away received\nI0507 00:32:42.570348 1870 log.go:172] (0xc00003a0b0) (0xc00050c280) Stream removed, broadcasting: 1\nI0507 00:32:42.570367 1870 log.go:172] (0xc00003a0b0) (0xc000856f00) Stream removed, broadcasting: 3\nI0507 00:32:42.570381 1870 log.go:172] (0xc00003a0b0) (0xc000a78000) Stream removed, broadcasting: 5\n" May 7 00:32:42.575: INFO: stdout: "" May 7 00:32:42.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2239 execpodrdgzq -- /bin/sh -x -c nc -zv -t -w 2 10.109.181.102 80' May 7 00:32:42.763: INFO: stderr: "I0507 00:32:42.700464 1892 log.go:172] (0xc0009d7e40) (0xc0006cd040) Create stream\nI0507 00:32:42.700521 1892 log.go:172] (0xc0009d7e40) (0xc0006cd040) Stream added, broadcasting: 1\nI0507 00:32:42.704755 1892 log.go:172] (0xc0009d7e40) Reply frame received for 1\nI0507 00:32:42.704798 1892 log.go:172] (0xc0009d7e40) (0xc0006adcc0) Create stream\nI0507 00:32:42.704812 1892 log.go:172] (0xc0009d7e40) (0xc0006adcc0) Stream added, broadcasting: 3\nI0507 00:32:42.705936 1892 log.go:172] (0xc0009d7e40) Reply frame received for 3\nI0507 00:32:42.705964 1892 log.go:172] (0xc0009d7e40) (0xc0006a2dc0) Create stream\nI0507 00:32:42.705973 1892 log.go:172] (0xc0009d7e40) (0xc0006a2dc0) Stream added, broadcasting: 5\nI0507 00:32:42.706817 1892 log.go:172] (0xc0009d7e40) Reply frame received for 5\nI0507 
00:32:42.756554 1892 log.go:172] (0xc0009d7e40) Data frame received for 3\nI0507 00:32:42.756605 1892 log.go:172] (0xc0006adcc0) (3) Data frame handling\nI0507 00:32:42.756643 1892 log.go:172] (0xc0009d7e40) Data frame received for 5\nI0507 00:32:42.756667 1892 log.go:172] (0xc0006a2dc0) (5) Data frame handling\nI0507 00:32:42.756679 1892 log.go:172] (0xc0006a2dc0) (5) Data frame sent\nI0507 00:32:42.756688 1892 log.go:172] (0xc0009d7e40) Data frame received for 5\nI0507 00:32:42.756695 1892 log.go:172] (0xc0006a2dc0) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.181.102 80\nConnection to 10.109.181.102 80 port [tcp/http] succeeded!\nI0507 00:32:42.758061 1892 log.go:172] (0xc0009d7e40) Data frame received for 1\nI0507 00:32:42.758085 1892 log.go:172] (0xc0006cd040) (1) Data frame handling\nI0507 00:32:42.758102 1892 log.go:172] (0xc0006cd040) (1) Data frame sent\nI0507 00:32:42.758127 1892 log.go:172] (0xc0009d7e40) (0xc0006cd040) Stream removed, broadcasting: 1\nI0507 00:32:42.758151 1892 log.go:172] (0xc0009d7e40) Go away received\nI0507 00:32:42.758433 1892 log.go:172] (0xc0009d7e40) (0xc0006cd040) Stream removed, broadcasting: 1\nI0507 00:32:42.758464 1892 log.go:172] (0xc0009d7e40) (0xc0006adcc0) Stream removed, broadcasting: 3\nI0507 00:32:42.758477 1892 log.go:172] (0xc0009d7e40) (0xc0006a2dc0) Stream removed, broadcasting: 5\n" May 7 00:32:42.763: INFO: stdout: "" May 7 00:32:42.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2239 execpodrdgzq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32322' May 7 00:32:42.987: INFO: stderr: "I0507 00:32:42.898716 1915 log.go:172] (0xc000758000) (0xc00023a8c0) Create stream\nI0507 00:32:42.898841 1915 log.go:172] (0xc000758000) (0xc00023a8c0) Stream added, broadcasting: 1\nI0507 00:32:42.902259 1915 log.go:172] (0xc000758000) Reply frame received for 1\nI0507 00:32:42.902295 1915 log.go:172] (0xc000758000) (0xc0008b9860) Create 
stream\nI0507 00:32:42.902305 1915 log.go:172] (0xc000758000) (0xc0008b9860) Stream added, broadcasting: 3\nI0507 00:32:42.903177 1915 log.go:172] (0xc000758000) Reply frame received for 3\nI0507 00:32:42.903209 1915 log.go:172] (0xc000758000) (0xc0002343c0) Create stream\nI0507 00:32:42.903250 1915 log.go:172] (0xc000758000) (0xc0002343c0) Stream added, broadcasting: 5\nI0507 00:32:42.904332 1915 log.go:172] (0xc000758000) Reply frame received for 5\nI0507 00:32:42.980601 1915 log.go:172] (0xc000758000) Data frame received for 3\nI0507 00:32:42.980667 1915 log.go:172] (0xc0008b9860) (3) Data frame handling\nI0507 00:32:42.980706 1915 log.go:172] (0xc000758000) Data frame received for 5\nI0507 00:32:42.980728 1915 log.go:172] (0xc0002343c0) (5) Data frame handling\nI0507 00:32:42.980759 1915 log.go:172] (0xc0002343c0) (5) Data frame sent\nI0507 00:32:42.980786 1915 log.go:172] (0xc000758000) Data frame received for 5\nI0507 00:32:42.980804 1915 log.go:172] (0xc0002343c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32322\nConnection to 172.17.0.13 32322 port [tcp/32322] succeeded!\nI0507 00:32:42.982065 1915 log.go:172] (0xc000758000) Data frame received for 1\nI0507 00:32:42.982093 1915 log.go:172] (0xc00023a8c0) (1) Data frame handling\nI0507 00:32:42.982120 1915 log.go:172] (0xc00023a8c0) (1) Data frame sent\nI0507 00:32:42.982290 1915 log.go:172] (0xc000758000) (0xc00023a8c0) Stream removed, broadcasting: 1\nI0507 00:32:42.982375 1915 log.go:172] (0xc000758000) Go away received\nI0507 00:32:42.982684 1915 log.go:172] (0xc000758000) (0xc00023a8c0) Stream removed, broadcasting: 1\nI0507 00:32:42.982701 1915 log.go:172] (0xc000758000) (0xc0008b9860) Stream removed, broadcasting: 3\nI0507 00:32:42.982710 1915 log.go:172] (0xc000758000) (0xc0002343c0) Stream removed, broadcasting: 5\n" May 7 00:32:42.987: INFO: stdout: "" May 7 00:32:42.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec 
--namespace=services-2239 execpodrdgzq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32322' May 7 00:32:43.188: INFO: stderr: "I0507 00:32:43.124264 1935 log.go:172] (0xc000bd71e0) (0xc000af6640) Create stream\nI0507 00:32:43.124312 1935 log.go:172] (0xc000bd71e0) (0xc000af6640) Stream added, broadcasting: 1\nI0507 00:32:43.129070 1935 log.go:172] (0xc000bd71e0) Reply frame received for 1\nI0507 00:32:43.129108 1935 log.go:172] (0xc000bd71e0) (0xc000538dc0) Create stream\nI0507 00:32:43.129252 1935 log.go:172] (0xc000bd71e0) (0xc000538dc0) Stream added, broadcasting: 3\nI0507 00:32:43.130233 1935 log.go:172] (0xc000bd71e0) Reply frame received for 3\nI0507 00:32:43.130278 1935 log.go:172] (0xc000bd71e0) (0xc000159720) Create stream\nI0507 00:32:43.130293 1935 log.go:172] (0xc000bd71e0) (0xc000159720) Stream added, broadcasting: 5\nI0507 00:32:43.131513 1935 log.go:172] (0xc000bd71e0) Reply frame received for 5\nI0507 00:32:43.183705 1935 log.go:172] (0xc000bd71e0) Data frame received for 5\nI0507 00:32:43.183749 1935 log.go:172] (0xc000159720) (5) Data frame handling\nI0507 00:32:43.183765 1935 log.go:172] (0xc000159720) (5) Data frame sent\nI0507 00:32:43.183774 1935 log.go:172] (0xc000bd71e0) Data frame received for 5\nI0507 00:32:43.183782 1935 log.go:172] (0xc000159720) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32322\nConnection to 172.17.0.12 32322 port [tcp/32322] succeeded!\nI0507 00:32:43.183807 1935 log.go:172] (0xc000bd71e0) Data frame received for 3\nI0507 00:32:43.183819 1935 log.go:172] (0xc000538dc0) (3) Data frame handling\nI0507 00:32:43.184800 1935 log.go:172] (0xc000bd71e0) Data frame received for 1\nI0507 00:32:43.184816 1935 log.go:172] (0xc000af6640) (1) Data frame handling\nI0507 00:32:43.184829 1935 log.go:172] (0xc000af6640) (1) Data frame sent\nI0507 00:32:43.184845 1935 log.go:172] (0xc000bd71e0) (0xc000af6640) Stream removed, broadcasting: 1\nI0507 00:32:43.184985 1935 log.go:172] (0xc000bd71e0) Go away received\nI0507 
00:32:43.185320 1935 log.go:172] (0xc000bd71e0) (0xc000af6640) Stream removed, broadcasting: 1\nI0507 00:32:43.185341 1935 log.go:172] (0xc000bd71e0) (0xc000538dc0) Stream removed, broadcasting: 3\nI0507 00:32:43.185355 1935 log.go:172] (0xc000bd71e0) (0xc000159720) Stream removed, broadcasting: 5\n" May 7 00:32:43.188: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:32:43.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2239" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:15.343 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":110,"skipped":1713,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:32:43.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set 
[LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-93dbf391-9107-4a5d-be17-103d72262a5f STEP: Creating a pod to test consume configMaps May 7 00:32:43.424: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1c6ee009-2b4a-4d2b-8c89-440c49094c83" in namespace "projected-1228" to be "Succeeded or Failed" May 7 00:32:43.439: INFO: Pod "pod-projected-configmaps-1c6ee009-2b4a-4d2b-8c89-440c49094c83": Phase="Pending", Reason="", readiness=false. Elapsed: 15.215572ms May 7 00:32:45.722: INFO: Pod "pod-projected-configmaps-1c6ee009-2b4a-4d2b-8c89-440c49094c83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297968285s May 7 00:32:47.727: INFO: Pod "pod-projected-configmaps-1c6ee009-2b4a-4d2b-8c89-440c49094c83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.302503302s May 7 00:32:49.782: INFO: Pod "pod-projected-configmaps-1c6ee009-2b4a-4d2b-8c89-440c49094c83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.358068567s STEP: Saw pod success May 7 00:32:49.782: INFO: Pod "pod-projected-configmaps-1c6ee009-2b4a-4d2b-8c89-440c49094c83" satisfied condition "Succeeded or Failed" May 7 00:32:49.786: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-1c6ee009-2b4a-4d2b-8c89-440c49094c83 container projected-configmap-volume-test: STEP: delete the pod May 7 00:32:49.987: INFO: Waiting for pod pod-projected-configmaps-1c6ee009-2b4a-4d2b-8c89-440c49094c83 to disappear May 7 00:32:50.035: INFO: Pod pod-projected-configmaps-1c6ee009-2b4a-4d2b-8c89-440c49094c83 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:32:50.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1228" for this suite. 
• [SLOW TEST:6.806 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":111,"skipped":1727,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:32:50.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:32:50.225: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-e5321b64-7e14-486e-b92b-48d9ab44823b" in namespace "security-context-test-6428" to be "Succeeded or Failed" May 7 00:32:50.239: INFO: Pod "busybox-privileged-false-e5321b64-7e14-486e-b92b-48d9ab44823b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.66117ms May 7 00:32:52.351: INFO: Pod "busybox-privileged-false-e5321b64-7e14-486e-b92b-48d9ab44823b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126148818s May 7 00:32:54.354: INFO: Pod "busybox-privileged-false-e5321b64-7e14-486e-b92b-48d9ab44823b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129296964s May 7 00:32:56.359: INFO: Pod "busybox-privileged-false-e5321b64-7e14-486e-b92b-48d9ab44823b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.133747459s May 7 00:32:56.359: INFO: Pod "busybox-privileged-false-e5321b64-7e14-486e-b92b-48d9ab44823b" satisfied condition "Succeeded or Failed" May 7 00:32:56.365: INFO: Got logs for pod "busybox-privileged-false-e5321b64-7e14-486e-b92b-48d9ab44823b": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:32:56.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6428" for this suite. 
• [SLOW TEST:6.299 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":112,"skipped":1743,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:32:56.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 7 00:33:01.139: INFO: Successfully updated pod "annotationupdateec2d5222-13f2-45f7-bee4-5c86d8848a3e" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:33:05.177: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2528" for this suite. • [SLOW TEST:8.812 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":113,"skipped":1745,"failed":0} [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:33:05.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 7 00:33:05.274: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 7 00:33:05.280: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 7 00:33:05.281: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 7 00:33:05.303: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 7 00:33:05.303: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 7 00:33:05.348: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 7 00:33:05.348: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 7 00:33:13.201: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:33:13.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-1586" for this suite. • [SLOW TEST:8.269 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":288,"completed":114,"skipped":1745,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:33:13.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:33:13.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 7 00:33:14.177: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-07T00:33:14Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-07T00:33:14Z]] name:name1 resourceVersion:2170820 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d9996255-38a1-466d-a416-7f343fdb91fc] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 7 00:33:24.184: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-07T00:33:24Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] 
f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-07T00:33:24Z]] name:name2 resourceVersion:2170875 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:48afe2cc-f51b-490f-9321-d5d02ade004a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 7 00:33:34.192: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-07T00:33:14Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-07T00:33:34Z]] name:name1 resourceVersion:2170914 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d9996255-38a1-466d-a416-7f343fdb91fc] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 7 00:33:44.200: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-07T00:33:24Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-07T00:33:44Z]] name:name2 resourceVersion:2170948 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:48afe2cc-f51b-490f-9321-d5d02ade004a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 7 00:33:54.207: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-07T00:33:14Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] 
f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-07T00:33:34Z]] name:name1 resourceVersion:2170979 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d9996255-38a1-466d-a416-7f343fdb91fc] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 7 00:34:04.215: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-07T00:33:24Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-07T00:33:44Z]] name:name2 resourceVersion:2171007 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:48afe2cc-f51b-490f-9321-d5d02ade004a] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:34:14.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-4933" for this suite. 
• [SLOW TEST:61.279 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":115,"skipped":1747,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:34:14.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-qt494 in namespace proxy-376 I0507 00:34:14.930489 7 runners.go:190] Created replication controller with name: proxy-service-qt494, namespace: proxy-376, replica count: 1 I0507 00:34:15.983366 7 runners.go:190] proxy-service-qt494 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:34:16.983579 7 runners.go:190] 
proxy-service-qt494 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:34:17.983823 7 runners.go:190] proxy-service-qt494 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:34:18.984005 7 runners.go:190] proxy-service-qt494 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 00:34:19.984278 7 runners.go:190] proxy-service-qt494 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 00:34:20.984469 7 runners.go:190] proxy-service-qt494 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 00:34:21.984671 7 runners.go:190] proxy-service-qt494 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 00:34:22.984900 7 runners.go:190] proxy-service-qt494 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 00:34:23.985260 7 runners.go:190] proxy-service-qt494 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 00:34:24.985499 7 runners.go:190] proxy-service-qt494 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 7 00:34:24.989: INFO: setup took 10.135677551s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 7 00:34:24.994: INFO: (0) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj/proxy/: test (200; 4.645586ms) May 7 00:34:25.002: INFO: (0) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:160/proxy/: foo (200; 12.520354ms) May 7 00:34:25.002: INFO: (0) 
/api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname1/proxy/: foo (200; 12.270835ms) May 7 00:34:25.002: INFO: (0) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:160/proxy/: foo (200; 12.129734ms) May 7 00:34:25.002: INFO: (0) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname1/proxy/: foo (200; 12.472287ms) May 7 00:34:25.002: INFO: (0) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:1080/proxy/: testt... (200; 12.487273ms) May 7 00:34:25.004: INFO: (0) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:162/proxy/: bar (200; 14.696672ms) May 7 00:34:25.004: INFO: (0) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname2/proxy/: bar (200; 14.779265ms) May 7 00:34:25.004: INFO: (0) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:162/proxy/: bar (200; 14.92808ms) May 7 00:34:25.005: INFO: (0) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname2/proxy/: bar (200; 16.049676ms) May 7 00:34:25.008: INFO: (0) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname1/proxy/: tls baz (200; 18.447231ms) May 7 00:34:25.008: INFO: (0) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:462/proxy/: tls qux (200; 18.102934ms) May 7 00:34:25.008: INFO: (0) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:460/proxy/: tls baz (200; 18.628286ms) May 7 00:34:25.008: INFO: (0) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:443/proxy/: testtest (200; 6.871178ms) May 7 00:34:25.015: INFO: (1) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:1080/proxy/: t... 
(200; 6.910828ms) May 7 00:34:25.015: INFO: (1) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:160/proxy/: foo (200; 6.921717ms) May 7 00:34:25.015: INFO: (1) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname1/proxy/: tls baz (200; 7.033509ms) May 7 00:34:25.015: INFO: (1) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:160/proxy/: foo (200; 7.125055ms) May 7 00:34:25.015: INFO: (1) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname2/proxy/: tls qux (200; 7.168858ms) May 7 00:34:25.015: INFO: (1) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname1/proxy/: foo (200; 7.209122ms) May 7 00:34:25.019: INFO: (2) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:160/proxy/: foo (200; 3.175195ms) May 7 00:34:25.019: INFO: (2) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj/proxy/: test (200; 3.213557ms) May 7 00:34:25.022: INFO: (2) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:1080/proxy/: testt... 
(200; 10.113968ms) May 7 00:34:25.025: INFO: (2) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname2/proxy/: bar (200; 10.165261ms) May 7 00:34:25.026: INFO: (2) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:160/proxy/: foo (200; 10.218534ms) May 7 00:34:25.031: INFO: (3) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:162/proxy/: bar (200; 4.812181ms) May 7 00:34:25.031: INFO: (3) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:160/proxy/: foo (200; 5.042611ms) May 7 00:34:25.031: INFO: (3) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj/proxy/: test (200; 5.107136ms) May 7 00:34:25.031: INFO: (3) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:162/proxy/: bar (200; 5.076826ms) May 7 00:34:25.031: INFO: (3) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:462/proxy/: tls qux (200; 5.061358ms) May 7 00:34:25.031: INFO: (3) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:1080/proxy/: t... (200; 5.187929ms) May 7 00:34:25.032: INFO: (3) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:160/proxy/: foo (200; 5.943749ms) May 7 00:34:25.032: INFO: (3) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:443/proxy/: testtest (200; 4.308867ms) May 7 00:34:25.037: INFO: (4) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:1080/proxy/: testt... 
(200; 4.822426ms) May 7 00:34:25.037: INFO: (4) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:162/proxy/: bar (200; 4.8223ms) May 7 00:34:25.037: INFO: (4) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:462/proxy/: tls qux (200; 4.976519ms) May 7 00:34:25.038: INFO: (4) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:160/proxy/: foo (200; 5.052753ms) May 7 00:34:25.041: INFO: (5) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:160/proxy/: foo (200; 3.839182ms) May 7 00:34:25.041: INFO: (5) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:162/proxy/: bar (200; 3.784984ms) May 7 00:34:25.041: INFO: (5) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:462/proxy/: tls qux (200; 3.843413ms) May 7 00:34:25.041: INFO: (5) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:162/proxy/: bar (200; 3.891698ms) May 7 00:34:25.042: INFO: (5) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname2/proxy/: bar (200; 4.127493ms) May 7 00:34:25.042: INFO: (5) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:460/proxy/: tls baz (200; 4.273298ms) May 7 00:34:25.042: INFO: (5) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname1/proxy/: foo (200; 4.413661ms) May 7 00:34:25.042: INFO: (5) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname1/proxy/: tls baz (200; 4.683182ms) May 7 00:34:25.042: INFO: (5) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname2/proxy/: bar (200; 4.754834ms) May 7 00:34:25.042: INFO: (5) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:1080/proxy/: testtest (200; 4.794881ms) May 7 00:34:25.043: INFO: (5) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname1/proxy/: foo (200; 4.914766ms) May 7 00:34:25.043: INFO: (5) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname2/proxy/: tls qux (200; 4.898097ms) May 7 
00:34:25.045: INFO: (5) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:1080/proxy/: t... (200; 7.888333ms) May 7 00:34:25.046: INFO: (5) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:443/proxy/: test (200; 4.702745ms) May 7 00:34:25.050: INFO: (6) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:1080/proxy/: t... (200; 4.681465ms) May 7 00:34:25.050: INFO: (6) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:460/proxy/: tls baz (200; 4.649576ms) May 7 00:34:25.050: INFO: (6) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:1080/proxy/: testtesttest (200; 6.063949ms) May 7 00:34:25.058: INFO: (7) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:460/proxy/: tls baz (200; 5.993603ms) May 7 00:34:25.058: INFO: (7) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname1/proxy/: foo (200; 5.992145ms) May 7 00:34:25.058: INFO: (7) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:443/proxy/: t... (200; 6.617155ms) May 7 00:34:25.059: INFO: (7) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname2/proxy/: tls qux (200; 6.691388ms) May 7 00:34:25.059: INFO: (7) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:462/proxy/: tls qux (200; 6.745029ms) May 7 00:34:25.064: INFO: (8) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:162/proxy/: bar (200; 4.799781ms) May 7 00:34:25.064: INFO: (8) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:160/proxy/: foo (200; 5.165671ms) May 7 00:34:25.064: INFO: (8) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:162/proxy/: bar (200; 5.282975ms) May 7 00:34:25.064: INFO: (8) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:443/proxy/: t... 
(200; 5.630665ms) May 7 00:34:25.065: INFO: (8) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:462/proxy/: tls qux (200; 5.583671ms) May 7 00:34:25.065: INFO: (8) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:460/proxy/: tls baz (200; 5.937609ms) May 7 00:34:25.065: INFO: (8) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:1080/proxy/: testtest (200; 6.062033ms) May 7 00:34:25.066: INFO: (8) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname1/proxy/: foo (200; 6.654518ms) May 7 00:34:25.066: INFO: (8) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname2/proxy/: bar (200; 6.606134ms) May 7 00:34:25.066: INFO: (8) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname1/proxy/: foo (200; 6.674709ms) May 7 00:34:25.066: INFO: (8) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname2/proxy/: bar (200; 6.891834ms) May 7 00:34:25.066: INFO: (8) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname2/proxy/: tls qux (200; 7.174583ms) May 7 00:34:25.069: INFO: (9) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:162/proxy/: bar (200; 3.257063ms) May 7 00:34:25.072: INFO: (9) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:160/proxy/: foo (200; 6.056916ms) May 7 00:34:25.072: INFO: (9) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname2/proxy/: tls qux (200; 6.117547ms) May 7 00:34:25.072: INFO: (9) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname2/proxy/: bar (200; 6.173645ms) May 7 00:34:25.072: INFO: (9) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname1/proxy/: foo (200; 6.126523ms) May 7 00:34:25.072: INFO: (9) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname1/proxy/: tls baz (200; 6.111205ms) May 7 00:34:25.072: INFO: (9) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname2/proxy/: bar (200; 6.142098ms) 
May 7 00:34:25.073: INFO: (9) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:460/proxy/: tls baz (200; 6.598091ms) May 7 00:34:25.073: INFO: (9) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:1080/proxy/: testt... (200; 6.871654ms) May 7 00:34:25.073: INFO: (9) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:162/proxy/: bar (200; 6.810737ms) May 7 00:34:25.073: INFO: (9) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:160/proxy/: foo (200; 6.838713ms) May 7 00:34:25.073: INFO: (9) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj/proxy/: test (200; 6.843577ms) May 7 00:34:25.073: INFO: (9) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:443/proxy/: test (200; 4.346322ms) May 7 00:34:25.078: INFO: (10) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:1080/proxy/: t... (200; 4.384962ms) May 7 00:34:25.078: INFO: (10) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:460/proxy/: tls baz (200; 4.751006ms) May 7 00:34:25.078: INFO: (10) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:443/proxy/: testtest (200; 6.172561ms) May 7 00:34:25.088: INFO: (11) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:160/proxy/: foo (200; 6.278179ms) May 7 00:34:25.088: INFO: (11) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:162/proxy/: bar (200; 6.172814ms) May 7 00:34:25.088: INFO: (11) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:462/proxy/: tls qux (200; 6.268455ms) May 7 00:34:25.088: INFO: (11) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname1/proxy/: foo (200; 6.22731ms) May 7 00:34:25.088: INFO: (11) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:160/proxy/: foo (200; 6.263028ms) May 7 00:34:25.089: INFO: (11) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:1080/proxy/: t... 
(200; 7.079643ms) May 7 00:34:25.089: INFO: (11) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:162/proxy/: bar (200; 7.095831ms) May 7 00:34:25.089: INFO: (11) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:1080/proxy/: testtest (200; 7.256858ms) May 7 00:34:25.097: INFO: (12) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:1080/proxy/: t... (200; 7.26948ms) May 7 00:34:25.097: INFO: (12) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname2/proxy/: bar (200; 7.320312ms) May 7 00:34:25.097: INFO: (12) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname1/proxy/: foo (200; 7.302341ms) May 7 00:34:25.097: INFO: (12) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:162/proxy/: bar (200; 7.362165ms) May 7 00:34:25.097: INFO: (12) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:1080/proxy/: testtesttest (200; 6.00373ms) May 7 00:34:25.103: INFO: (13) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:1080/proxy/: t... 
(200; 5.995346ms) May 7 00:34:25.103: INFO: (13) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname2/proxy/: bar (200; 6.233506ms) May 7 00:34:25.103: INFO: (13) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname2/proxy/: bar (200; 6.283793ms) May 7 00:34:25.104: INFO: (13) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname2/proxy/: tls qux (200; 6.463954ms) May 7 00:34:25.104: INFO: (13) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname1/proxy/: foo (200; 6.495794ms) May 7 00:34:25.104: INFO: (13) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname1/proxy/: tls baz (200; 6.613691ms) May 7 00:34:25.108: INFO: (14) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname2/proxy/: bar (200; 4.201675ms) May 7 00:34:25.109: INFO: (14) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname1/proxy/: foo (200; 5.466036ms) May 7 00:34:25.109: INFO: (14) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:160/proxy/: foo (200; 5.439291ms) May 7 00:34:25.109: INFO: (14) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:162/proxy/: bar (200; 5.464937ms) May 7 00:34:25.109: INFO: (14) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname1/proxy/: foo (200; 5.545264ms) May 7 00:34:25.109: INFO: (14) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:1080/proxy/: t... 
(200; 5.479392ms) May 7 00:34:25.109: INFO: (14) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:162/proxy/: bar (200; 5.574977ms) May 7 00:34:25.109: INFO: (14) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:1080/proxy/: testtest (200; 5.548683ms) May 7 00:34:25.109: INFO: (14) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname2/proxy/: bar (200; 5.60806ms) May 7 00:34:25.109: INFO: (14) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname1/proxy/: tls baz (200; 5.688104ms) May 7 00:34:25.109: INFO: (14) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname2/proxy/: tls qux (200; 5.615143ms) May 7 00:34:25.109: INFO: (14) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:443/proxy/: t... (200; 3.090855ms) May 7 00:34:25.113: INFO: (15) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:160/proxy/: foo (200; 3.106422ms) May 7 00:34:25.114: INFO: (15) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj/proxy/: test (200; 4.464672ms) May 7 00:34:25.115: INFO: (15) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:462/proxy/: tls qux (200; 4.80043ms) May 7 00:34:25.115: INFO: (15) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:1080/proxy/: testt... 
(200; 4.201853ms) May 7 00:34:25.121: INFO: (16) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:162/proxy/: bar (200; 4.213926ms) May 7 00:34:25.121: INFO: (16) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:162/proxy/: bar (200; 4.230418ms) May 7 00:34:25.127: INFO: (16) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:443/proxy/: testtest (200; 10.800595ms) May 7 00:34:25.128: INFO: (16) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname2/proxy/: bar (200; 10.987036ms) May 7 00:34:25.128: INFO: (16) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname1/proxy/: foo (200; 11.058306ms) May 7 00:34:25.128: INFO: (16) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname1/proxy/: foo (200; 11.339466ms) May 7 00:34:25.128: INFO: (16) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname2/proxy/: tls qux (200; 11.324663ms) May 7 00:34:25.128: INFO: (16) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:160/proxy/: foo (200; 11.477155ms) May 7 00:34:25.132: INFO: (17) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:1080/proxy/: testt... 
(200; 3.760984ms) May 7 00:34:25.132: INFO: (17) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:160/proxy/: foo (200; 3.779404ms) May 7 00:34:25.133: INFO: (17) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:443/proxy/: test (200; 5.150174ms) May 7 00:34:25.135: INFO: (17) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:462/proxy/: tls qux (200; 6.028831ms) May 7 00:34:25.135: INFO: (17) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:460/proxy/: tls baz (200; 5.72253ms) May 7 00:34:25.135: INFO: (17) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:160/proxy/: foo (200; 6.042159ms) May 7 00:34:25.135: INFO: (17) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname1/proxy/: foo (200; 6.200717ms) May 7 00:34:25.135: INFO: (17) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname2/proxy/: bar (200; 6.389491ms) May 7 00:34:25.135: INFO: (17) /api/v1/namespaces/proxy-376/services/http:proxy-service-qt494:portname2/proxy/: bar (200; 6.429277ms) May 7 00:34:25.135: INFO: (17) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname2/proxy/: tls qux (200; 6.416521ms) May 7 00:34:25.135: INFO: (17) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname1/proxy/: foo (200; 6.734781ms) May 7 00:34:25.145: INFO: (18) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:160/proxy/: foo (200; 9.654132ms) May 7 00:34:25.145: INFO: (18) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:162/proxy/: bar (200; 9.679656ms) May 7 00:34:25.146: INFO: (18) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj/proxy/: test (200; 10.135303ms) May 7 00:34:25.146: INFO: (18) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:1080/proxy/: t... (200; 10.16722ms) May 7 00:34:25.146: INFO: (18) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:1080/proxy/: testt... 
(200; 5.525455ms) May 7 00:34:25.181: INFO: (19) /api/v1/namespaces/proxy-376/pods/https:proxy-service-qt494-mqrfj:462/proxy/: tls qux (200; 5.497315ms) May 7 00:34:25.181: INFO: (19) /api/v1/namespaces/proxy-376/pods/http:proxy-service-qt494-mqrfj:162/proxy/: bar (200; 5.48407ms) May 7 00:34:25.181: INFO: (19) /api/v1/namespaces/proxy-376/pods/proxy-service-qt494-mqrfj:1080/proxy/: testtest (200; 5.536189ms) May 7 00:34:25.181: INFO: (19) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname2/proxy/: tls qux (200; 5.628774ms) May 7 00:34:25.182: INFO: (19) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname1/proxy/: foo (200; 5.971345ms) May 7 00:34:25.182: INFO: (19) /api/v1/namespaces/proxy-376/services/proxy-service-qt494:portname2/proxy/: bar (200; 6.103362ms) May 7 00:34:25.182: INFO: (19) /api/v1/namespaces/proxy-376/services/https:proxy-service-qt494:tlsportname1/proxy/: tls baz (200; 6.091688ms) STEP: deleting ReplicationController proxy-service-qt494 in namespace proxy-376, will wait for the garbage collector to delete the pods May 7 00:34:25.239: INFO: Deleting ReplicationController proxy-service-qt494 took: 5.229202ms May 7 00:34:25.539: INFO: Terminating ReplicationController proxy-service-qt494 pods took: 300.512501ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:34:35.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-376" for this suite. 
• [SLOW TEST:20.613 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":288,"completed":116,"skipped":1774,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:34:35.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-9aa1e351-cc9e-4eef-8cd9-c0891cc33a85
STEP: Creating a pod to test consume configMaps
May 7 00:34:35.599: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-42ee3d0c-862d-4e7c-a463-ce318b424986" in namespace "projected-962" to be "Succeeded or Failed"
May 7 00:34:35.614: INFO: Pod "pod-projected-configmaps-42ee3d0c-862d-4e7c-a463-ce318b424986": Phase="Pending", Reason="", readiness=false. Elapsed: 14.729225ms
May 7 00:34:37.788: INFO: Pod "pod-projected-configmaps-42ee3d0c-862d-4e7c-a463-ce318b424986": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189441069s
May 7 00:34:39.793: INFO: Pod "pod-projected-configmaps-42ee3d0c-862d-4e7c-a463-ce318b424986": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.194381732s
STEP: Saw pod success
May 7 00:34:39.793: INFO: Pod "pod-projected-configmaps-42ee3d0c-862d-4e7c-a463-ce318b424986" satisfied condition "Succeeded or Failed"
May 7 00:34:39.796: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-42ee3d0c-862d-4e7c-a463-ce318b424986 container projected-configmap-volume-test:
STEP: delete the pod
May 7 00:34:39.857: INFO: Waiting for pod pod-projected-configmaps-42ee3d0c-862d-4e7c-a463-ce318b424986 to disappear
May 7 00:34:39.868: INFO: Pod pod-projected-configmaps-42ee3d0c-862d-4e7c-a463-ce318b424986 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:34:39.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-962" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":117,"skipped":1822,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:34:39.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
May 7 00:34:53.089: INFO: 5 pods remaining
May 7 00:34:53.089: INFO: 5 pods has nil DeletionTimestamp
May 7 00:34:53.089: INFO:
STEP: Gathering metrics
W0507 00:34:57.253696 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 7 00:34:57.253: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:34:57.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4386" for this suite.
• [SLOW TEST:17.373 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":118,"skipped":1833,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:34:57.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 7 00:34:57.380: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 7 00:34:57.393: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:34:57.398: INFO: Number of nodes with available pods: 0
May 7 00:34:57.398: INFO: Node latest-worker is running more than one daemon pod
May 7 00:34:58.403: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:34:58.407: INFO: Number of nodes with available pods: 0
May 7 00:34:58.407: INFO: Node latest-worker is running more than one daemon pod
May 7 00:34:59.404: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:34:59.408: INFO: Number of nodes with available pods: 0
May 7 00:34:59.408: INFO: Node latest-worker is running more than one daemon pod
May 7 00:35:00.404: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:00.408: INFO: Number of nodes with available pods: 0
May 7 00:35:00.408: INFO: Node latest-worker is running more than one daemon pod
May 7 00:35:01.404: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:01.407: INFO: Number of nodes with available pods: 1
May 7 00:35:01.407: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:35:03.516: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:04.700: INFO: Number of nodes with available pods: 2
May 7 00:35:04.700: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 7 00:35:05.542: INFO: Wrong image for pod: daemon-set-87p8v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:05.542: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:06.035: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:07.119: INFO: Wrong image for pod: daemon-set-87p8v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:07.119: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:07.149: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:08.203: INFO: Wrong image for pod: daemon-set-87p8v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:08.203: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:08.214: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:09.131: INFO: Wrong image for pod: daemon-set-87p8v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:09.131: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:09.143: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:10.058: INFO: Wrong image for pod: daemon-set-87p8v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:10.058: INFO: Pod daemon-set-87p8v is not available
May 7 00:35:10.058: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:10.061: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:11.041: INFO: Wrong image for pod: daemon-set-87p8v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:11.041: INFO: Pod daemon-set-87p8v is not available
May 7 00:35:11.041: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:11.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:12.040: INFO: Pod daemon-set-6knb6 is not available
May 7 00:35:12.040: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:12.044: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:13.059: INFO: Pod daemon-set-6knb6 is not available
May 7 00:35:13.059: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:13.064: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:14.096: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:14.101: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:15.156: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:15.160: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:16.040: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:16.040: INFO: Pod daemon-set-z8jvw is not available
May 7 00:35:16.044: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:17.107: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:17.107: INFO: Pod daemon-set-z8jvw is not available
May 7 00:35:17.111: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:18.040: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:18.040: INFO: Pod daemon-set-z8jvw is not available
May 7 00:35:18.043: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:19.059: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:19.059: INFO: Pod daemon-set-z8jvw is not available
May 7 00:35:19.063: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:20.040: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:20.040: INFO: Pod daemon-set-z8jvw is not available
May 7 00:35:20.043: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:21.136: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:21.136: INFO: Pod daemon-set-z8jvw is not available
May 7 00:35:21.140: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:22.040: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:22.040: INFO: Pod daemon-set-z8jvw is not available
May 7 00:35:22.044: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:23.040: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:23.040: INFO: Pod daemon-set-z8jvw is not available
May 7 00:35:23.045: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:24.040: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:24.040: INFO: Pod daemon-set-z8jvw is not available
May 7 00:35:24.045: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:25.059: INFO: Wrong image for pod: daemon-set-z8jvw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 7 00:35:25.059: INFO: Pod daemon-set-z8jvw is not available
May 7 00:35:25.063: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:26.040: INFO: Pod daemon-set-tx7zs is not available
May 7 00:35:26.044: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May 7 00:35:26.048: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:26.052: INFO: Number of nodes with available pods: 1
May 7 00:35:26.052: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:35:27.364: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:27.368: INFO: Number of nodes with available pods: 1
May 7 00:35:27.368: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:35:28.113: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:28.171: INFO: Number of nodes with available pods: 1
May 7 00:35:28.171: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:35:29.057: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:35:29.060: INFO: Number of nodes with available pods: 2
May 7 00:35:29.060: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8820, will wait for the garbage collector to delete the pods
May 7 00:35:29.132: INFO: Deleting DaemonSet.extensions daemon-set took: 5.888936ms
May 7 00:35:29.433: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.457805ms
May 7 00:35:35.345: INFO: Number of nodes with available pods: 0
May 7 00:35:35.345: INFO: Number of running nodes: 0, number of available pods: 0
May 7 00:35:35.348: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8820/daemonsets","resourceVersion":"2171605"},"items":null}
May 7 00:35:35.350: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8820/pods","resourceVersion":"2171605"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:35:35.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8820" for this suite.
• [SLOW TEST:38.102 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":119,"skipped":1857,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:35:35.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating secret secrets-6118/secret-test-4ee30e88-d234-4eae-92fb-37be15625aa2
STEP: Creating a pod to test consume secrets
May 7 00:35:35.480: INFO: Waiting up to 5m0s for pod "pod-configmaps-dcd2207a-ef2f-4f3a-bc02-a235722cd340" in namespace "secrets-6118" to be "Succeeded or Failed"
May 7 00:35:35.515: INFO: Pod "pod-configmaps-dcd2207a-ef2f-4f3a-bc02-a235722cd340": Phase="Pending", Reason="", readiness=false. Elapsed: 34.938279ms
May 7 00:35:37.518: INFO: Pod "pod-configmaps-dcd2207a-ef2f-4f3a-bc02-a235722cd340": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038314497s
May 7 00:35:39.522: INFO: Pod "pod-configmaps-dcd2207a-ef2f-4f3a-bc02-a235722cd340": Phase="Running", Reason="", readiness=true. Elapsed: 4.04270194s
May 7 00:35:41.527: INFO: Pod "pod-configmaps-dcd2207a-ef2f-4f3a-bc02-a235722cd340": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047540874s
STEP: Saw pod success
May 7 00:35:41.527: INFO: Pod "pod-configmaps-dcd2207a-ef2f-4f3a-bc02-a235722cd340" satisfied condition "Succeeded or Failed"
May 7 00:35:41.531: INFO: Trying to get logs from node latest-worker pod pod-configmaps-dcd2207a-ef2f-4f3a-bc02-a235722cd340 container env-test:
STEP: delete the pod
May 7 00:35:41.618: INFO: Waiting for pod pod-configmaps-dcd2207a-ef2f-4f3a-bc02-a235722cd340 to disappear
May 7 00:35:41.625: INFO: Pod pod-configmaps-dcd2207a-ef2f-4f3a-bc02-a235722cd340 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:35:41.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6118" for this suite.
• [SLOW TEST:6.268 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":120,"skipped":1864,"failed":0}
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:35:41.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:35:41.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7390" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":121,"skipped":1864,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:35:41.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
May 7 00:35:41.831: INFO: Waiting up to 5m0s for pod "var-expansion-ac9e1524-9d6a-47e9-8811-ce5f1c282039" in namespace "var-expansion-6084" to be "Succeeded or Failed"
May 7 00:35:41.885: INFO: Pod "var-expansion-ac9e1524-9d6a-47e9-8811-ce5f1c282039": Phase="Pending", Reason="", readiness=false. Elapsed: 54.290002ms
May 7 00:35:43.889: INFO: Pod "var-expansion-ac9e1524-9d6a-47e9-8811-ce5f1c282039": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058540947s
May 7 00:35:45.920: INFO: Pod "var-expansion-ac9e1524-9d6a-47e9-8811-ce5f1c282039": Phase="Running", Reason="", readiness=true. Elapsed: 4.088691921s
May 7 00:35:47.924: INFO: Pod "var-expansion-ac9e1524-9d6a-47e9-8811-ce5f1c282039": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.093317052s
STEP: Saw pod success
May 7 00:35:47.924: INFO: Pod "var-expansion-ac9e1524-9d6a-47e9-8811-ce5f1c282039" satisfied condition "Succeeded or Failed"
May 7 00:35:47.928: INFO: Trying to get logs from node latest-worker2 pod var-expansion-ac9e1524-9d6a-47e9-8811-ce5f1c282039 container dapi-container:
STEP: delete the pod
May 7 00:35:47.976: INFO: Waiting for pod var-expansion-ac9e1524-9d6a-47e9-8811-ce5f1c282039 to disappear
May 7 00:35:47.980: INFO: Pod var-expansion-ac9e1524-9d6a-47e9-8811-ce5f1c282039 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:35:47.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6084" for this suite.
• [SLOW TEST:6.227 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":122,"skipped":1875,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:35:47.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 7 00:35:48.043: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a3e1eaf1-640a-419f-92b4-44a849c243df" in namespace "projected-9277" to be "Succeeded or Failed"
May 7 00:35:48.047: INFO: Pod "downwardapi-volume-a3e1eaf1-640a-419f-92b4-44a849c243df": Phase="Pending", Reason="", readiness=false. Elapsed: 3.313713ms
May 7 00:35:50.052: INFO: Pod "downwardapi-volume-a3e1eaf1-640a-419f-92b4-44a849c243df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008230964s
May 7 00:35:52.059: INFO: Pod "downwardapi-volume-a3e1eaf1-640a-419f-92b4-44a849c243df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016051085s
STEP: Saw pod success
May 7 00:35:52.059: INFO: Pod "downwardapi-volume-a3e1eaf1-640a-419f-92b4-44a849c243df" satisfied condition "Succeeded or Failed"
May 7 00:35:52.063: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a3e1eaf1-640a-419f-92b4-44a849c243df container client-container:
STEP: delete the pod
May 7 00:35:52.107: INFO: Waiting for pod downwardapi-volume-a3e1eaf1-640a-419f-92b4-44a849c243df to disappear
May 7 00:35:52.113: INFO: Pod downwardapi-volume-a3e1eaf1-640a-419f-92b4-44a849c243df no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:35:52.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9277" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":123,"skipped":1883,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:35:52.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 7 00:35:52.187: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5632 /api/v1/namespaces/watch-5632/configmaps/e2e-watch-test-watch-closed 6920e87a-1fdc-4331-8260-15857dabc96c 2171754 0 2020-05-07 00:35:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-07 00:35:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 7 00:35:52.187: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5632 /api/v1/namespaces/watch-5632/configmaps/e2e-watch-test-watch-closed 6920e87a-1fdc-4331-8260-15857dabc96c 2171755 0 2020-05-07 00:35:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] 
[] [] [{e2e.test Update v1 2020-05-07 00:35:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 7 00:35:52.199: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5632 /api/v1/namespaces/watch-5632/configmaps/e2e-watch-test-watch-closed 6920e87a-1fdc-4331-8260-15857dabc96c 2171756 0 2020-05-07 00:35:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-07 00:35:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 7 00:35:52.199: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5632 /api/v1/namespaces/watch-5632/configmaps/e2e-watch-test-watch-closed 6920e87a-1fdc-4331-8260-15857dabc96c 2171757 0 2020-05-07 00:35:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-07 00:35:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:35:52.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5632" for this suite. 
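The Watchers test above resumes a watch from the last resourceVersion the first watch observed (2171755), and expects the later MODIFIED and DELETED events to be replayed. The same semantics can be exercised against the raw API; this is a sketch only, reusing the namespace and resourceVersion from the log above and assuming kubectl plus a reachable cluster:

```shell
# Sketch: resume watching configmaps from a known resourceVersion.
# Namespace and resourceVersion are illustrative values taken from the log;
# events recorded after that version are streamed back to the client.
kubectl get --raw \
  "/api/v1/namespaces/watch-5632/configmaps?watch=1&resourceVersion=2171755"
```

Because the server replays every change after the supplied version, a client that reconnects this way observes the mutation made while its previous watch was closed, which is exactly what the test asserts.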
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":124,"skipped":1884,"failed":0} SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:35:52.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-4adc1eef-2c47-4f52-8a5d-fd0f1ada5836 STEP: Creating secret with name secret-projected-all-test-volume-94ca73e7-8d1c-4881-95f3-d96a2e70d05a STEP: Creating a pod to test Check all projections for projected volume plugin May 7 00:35:52.299: INFO: Waiting up to 5m0s for pod "projected-volume-74591d08-cfed-4a49-95dd-ab00ca0ae905" in namespace "projected-814" to be "Succeeded or Failed" May 7 00:35:52.304: INFO: Pod "projected-volume-74591d08-cfed-4a49-95dd-ab00ca0ae905": Phase="Pending", Reason="", readiness=false. Elapsed: 5.230358ms May 7 00:35:54.308: INFO: Pod "projected-volume-74591d08-cfed-4a49-95dd-ab00ca0ae905": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009570277s May 7 00:35:56.422: INFO: Pod "projected-volume-74591d08-cfed-4a49-95dd-ab00ca0ae905": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.122772939s May 7 00:35:58.425: INFO: Pod "projected-volume-74591d08-cfed-4a49-95dd-ab00ca0ae905": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.125983465s STEP: Saw pod success May 7 00:35:58.425: INFO: Pod "projected-volume-74591d08-cfed-4a49-95dd-ab00ca0ae905" satisfied condition "Succeeded or Failed" May 7 00:35:58.427: INFO: Trying to get logs from node latest-worker2 pod projected-volume-74591d08-cfed-4a49-95dd-ab00ca0ae905 container projected-all-volume-test: STEP: delete the pod May 7 00:35:58.578: INFO: Waiting for pod projected-volume-74591d08-cfed-4a49-95dd-ab00ca0ae905 to disappear May 7 00:35:58.583: INFO: Pod projected-volume-74591d08-cfed-4a49-95dd-ab00ca0ae905 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:35:58.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-814" for this suite. 
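The Projected combined test above mounts a configMap, a secret, and the downward API through one projected volume. A minimal manifest exercising the same projection might look like the following sketch; the pod, configMap, and secret names are hypothetical, not the generated ones from the log:

```shell
# Sketch of an all-in-one projected volume; all names are hypothetical.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "ls /projected-volume && cat /projected-volume/podname"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: demo-configmap      # assumed to exist
      - secret:
          name: demo-secret         # assumed to exist
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
```

All three sources land under a single mount point, which is what distinguishes a projected volume from mounting three separate volumes.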
• [SLOW TEST:6.386 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":125,"skipped":1887,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:35:58.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 7 00:35:58.932: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 7 00:35:59.134: INFO: Waiting for terminating namespaces to be deleted... 
May 7 00:35:59.137: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 7 00:35:59.142: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 7 00:35:59.142: INFO: Container kindnet-cni ready: true, restart count 0 May 7 00:35:59.142: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 7 00:35:59.142: INFO: Container kube-proxy ready: true, restart count 0 May 7 00:35:59.142: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 7 00:35:59.146: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 7 00:35:59.146: INFO: Container kindnet-cni ready: true, restart count 0 May 7 00:35:59.146: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 7 00:35:59.146: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5d6887cb-dba0-44a1-8105-20f69de91247 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-5d6887cb-dba0-44a1-8105-20f69de91247 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-5d6887cb-dba0-44a1-8105-20f69de91247 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:36:09.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2170" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.855 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":126,"skipped":1899,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:36:09.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-d957c96e-bc33-42e2-80d9-55c8147e680e STEP: Creating a pod to test consume configMaps May 7 00:36:09.580: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9244e480-6089-46c9-a5cc-56b61cec119a" in namespace "projected-1847" to be "Succeeded or Failed" May 7 00:36:09.636: INFO: Pod "pod-projected-configmaps-9244e480-6089-46c9-a5cc-56b61cec119a": Phase="Pending", Reason="", readiness=false. Elapsed: 55.701547ms May 7 00:36:11.639: INFO: Pod "pod-projected-configmaps-9244e480-6089-46c9-a5cc-56b61cec119a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059305483s May 7 00:36:13.643: INFO: Pod "pod-projected-configmaps-9244e480-6089-46c9-a5cc-56b61cec119a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063386419s May 7 00:36:15.754: INFO: Pod "pod-projected-configmaps-9244e480-6089-46c9-a5cc-56b61cec119a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.173860809s STEP: Saw pod success May 7 00:36:15.754: INFO: Pod "pod-projected-configmaps-9244e480-6089-46c9-a5cc-56b61cec119a" satisfied condition "Succeeded or Failed" May 7 00:36:15.756: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-9244e480-6089-46c9-a5cc-56b61cec119a container projected-configmap-volume-test: STEP: delete the pod May 7 00:36:15.848: INFO: Waiting for pod pod-projected-configmaps-9244e480-6089-46c9-a5cc-56b61cec119a to disappear May 7 00:36:15.915: INFO: Pod pod-projected-configmaps-9244e480-6089-46c9-a5cc-56b61cec119a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:36:15.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1847" for this suite. 
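The non-root variant of the projected configMap test differs from the default case mainly in the pod security context: the kubelet must leave the projected keys readable by the configured UID. A hedged sketch with hypothetical names, assuming a configMap `demo-configmap` containing a key `data-1`:

```shell
# Sketch: consume a projected configMap as a non-root user; names are hypothetical.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # non-root UID
    fsGroup: 1000
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: demo-configmap   # assumed to exist with key "data-1"
EOF
```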
• [SLOW TEST:6.460 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":127,"skipped":1903,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:36:15.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 7 00:36:16.411: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2ec638b-260f-4fba-848e-99e35fa799ef" in namespace "downward-api-5042" to be "Succeeded or Failed" May 7 00:36:16.434: INFO: Pod "downwardapi-volume-b2ec638b-260f-4fba-848e-99e35fa799ef": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.479667ms May 7 00:36:18.438: INFO: Pod "downwardapi-volume-b2ec638b-260f-4fba-848e-99e35fa799ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026671398s May 7 00:36:20.442: INFO: Pod "downwardapi-volume-b2ec638b-260f-4fba-848e-99e35fa799ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030118588s May 7 00:36:22.538: INFO: Pod "downwardapi-volume-b2ec638b-260f-4fba-848e-99e35fa799ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126663247s STEP: Saw pod success May 7 00:36:22.538: INFO: Pod "downwardapi-volume-b2ec638b-260f-4fba-848e-99e35fa799ef" satisfied condition "Succeeded or Failed" May 7 00:36:22.541: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b2ec638b-260f-4fba-848e-99e35fa799ef container client-container: STEP: delete the pod May 7 00:36:22.600: INFO: Waiting for pod downwardapi-volume-b2ec638b-260f-4fba-848e-99e35fa799ef to disappear May 7 00:36:22.607: INFO: Pod downwardapi-volume-b2ec638b-260f-4fba-848e-99e35fa799ef no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:36:22.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5042" for this suite. 
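The Downward API test above reads the container's own memory request from a mounted file. The mechanism is a `downwardAPI` volume item with a `resourceFieldRef`; the following is a minimal sketch under assumed names (the container name must match the one holding the resource request):

```shell
# Sketch: expose the container's memory request via a downwardAPI volume.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF
```

With no `divisor` set, the value is rendered in base units (bytes), which is what the test's client container reads back and verifies.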
• [SLOW TEST:6.694 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":128,"skipped":1908,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:36:22.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1656 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 7 00:36:22.775: INFO: Found 0 stateful pods, waiting for 3 May 7 00:36:32.844: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 7 00:36:32.844: 
INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 7 00:36:32.844: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 7 00:36:42.780: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 7 00:36:42.780: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 7 00:36:42.780: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 7 00:36:42.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1656 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 7 00:36:43.061: INFO: stderr: "I0507 00:36:42.935306 1955 log.go:172] (0xc000b194a0) (0xc000afc320) Create stream\nI0507 00:36:42.935359 1955 log.go:172] (0xc000b194a0) (0xc000afc320) Stream added, broadcasting: 1\nI0507 00:36:42.942126 1955 log.go:172] (0xc000b194a0) Reply frame received for 1\nI0507 00:36:42.942180 1955 log.go:172] (0xc000b194a0) (0xc0006aa640) Create stream\nI0507 00:36:42.942200 1955 log.go:172] (0xc000b194a0) (0xc0006aa640) Stream added, broadcasting: 3\nI0507 00:36:42.943417 1955 log.go:172] (0xc000b194a0) Reply frame received for 3\nI0507 00:36:42.943452 1955 log.go:172] (0xc000b194a0) (0xc000534dc0) Create stream\nI0507 00:36:42.943463 1955 log.go:172] (0xc000b194a0) (0xc000534dc0) Stream added, broadcasting: 5\nI0507 00:36:42.945029 1955 log.go:172] (0xc000b194a0) Reply frame received for 5\nI0507 00:36:43.023490 1955 log.go:172] (0xc000b194a0) Data frame received for 5\nI0507 00:36:43.023517 1955 log.go:172] (0xc000534dc0) (5) Data frame handling\nI0507 00:36:43.023533 1955 log.go:172] (0xc000534dc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0507 00:36:43.053091 1955 log.go:172] (0xc000b194a0) Data frame received for 3\nI0507 00:36:43.053428 1955 
log.go:172] (0xc0006aa640) (3) Data frame handling\nI0507 00:36:43.053600 1955 log.go:172] (0xc000b194a0) Data frame received for 5\nI0507 00:36:43.053642 1955 log.go:172] (0xc000534dc0) (5) Data frame handling\nI0507 00:36:43.053691 1955 log.go:172] (0xc0006aa640) (3) Data frame sent\nI0507 00:36:43.053737 1955 log.go:172] (0xc000b194a0) Data frame received for 3\nI0507 00:36:43.053754 1955 log.go:172] (0xc0006aa640) (3) Data frame handling\nI0507 00:36:43.055084 1955 log.go:172] (0xc000b194a0) Data frame received for 1\nI0507 00:36:43.055100 1955 log.go:172] (0xc000afc320) (1) Data frame handling\nI0507 00:36:43.055106 1955 log.go:172] (0xc000afc320) (1) Data frame sent\nI0507 00:36:43.055113 1955 log.go:172] (0xc000b194a0) (0xc000afc320) Stream removed, broadcasting: 1\nI0507 00:36:43.055157 1955 log.go:172] (0xc000b194a0) Go away received\nI0507 00:36:43.055353 1955 log.go:172] (0xc000b194a0) (0xc000afc320) Stream removed, broadcasting: 1\nI0507 00:36:43.055372 1955 log.go:172] (0xc000b194a0) (0xc0006aa640) Stream removed, broadcasting: 3\nI0507 00:36:43.055382 1955 log.go:172] (0xc000b194a0) (0xc000534dc0) Stream removed, broadcasting: 5\n" May 7 00:36:43.061: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 7 00:36:43.061: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 7 00:36:53.095: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 7 00:37:03.149: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1656 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 7 00:37:06.124: INFO: stderr: "I0507 00:37:06.027417 1974 
log.go:172] (0xc00055c000) (0xc0005a81e0) Create stream\nI0507 00:37:06.027456 1974 log.go:172] (0xc00055c000) (0xc0005a81e0) Stream added, broadcasting: 1\nI0507 00:37:06.030427 1974 log.go:172] (0xc00055c000) Reply frame received for 1\nI0507 00:37:06.030496 1974 log.go:172] (0xc00055c000) (0xc000686500) Create stream\nI0507 00:37:06.030517 1974 log.go:172] (0xc00055c000) (0xc000686500) Stream added, broadcasting: 3\nI0507 00:37:06.031535 1974 log.go:172] (0xc00055c000) Reply frame received for 3\nI0507 00:37:06.031575 1974 log.go:172] (0xc00055c000) (0xc0005a9180) Create stream\nI0507 00:37:06.031587 1974 log.go:172] (0xc00055c000) (0xc0005a9180) Stream added, broadcasting: 5\nI0507 00:37:06.032378 1974 log.go:172] (0xc00055c000) Reply frame received for 5\nI0507 00:37:06.117107 1974 log.go:172] (0xc00055c000) Data frame received for 5\nI0507 00:37:06.117435 1974 log.go:172] (0xc0005a9180) (5) Data frame handling\nI0507 00:37:06.117448 1974 log.go:172] (0xc0005a9180) (5) Data frame sent\nI0507 00:37:06.117456 1974 log.go:172] (0xc00055c000) Data frame received for 5\nI0507 00:37:06.117463 1974 log.go:172] (0xc0005a9180) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0507 00:37:06.117487 1974 log.go:172] (0xc00055c000) Data frame received for 3\nI0507 00:37:06.117496 1974 log.go:172] (0xc000686500) (3) Data frame handling\nI0507 00:37:06.117504 1974 log.go:172] (0xc000686500) (3) Data frame sent\nI0507 00:37:06.117511 1974 log.go:172] (0xc00055c000) Data frame received for 3\nI0507 00:37:06.117517 1974 log.go:172] (0xc000686500) (3) Data frame handling\nI0507 00:37:06.118802 1974 log.go:172] (0xc00055c000) Data frame received for 1\nI0507 00:37:06.118821 1974 log.go:172] (0xc0005a81e0) (1) Data frame handling\nI0507 00:37:06.118833 1974 log.go:172] (0xc0005a81e0) (1) Data frame sent\nI0507 00:37:06.118844 1974 log.go:172] (0xc00055c000) (0xc0005a81e0) Stream removed, broadcasting: 1\nI0507 00:37:06.118918 1974 log.go:172] 
(0xc00055c000) Go away received\nI0507 00:37:06.119161 1974 log.go:172] (0xc00055c000) (0xc0005a81e0) Stream removed, broadcasting: 1\nI0507 00:37:06.119180 1974 log.go:172] (0xc00055c000) (0xc000686500) Stream removed, broadcasting: 3\nI0507 00:37:06.119189 1974 log.go:172] (0xc00055c000) (0xc0005a9180) Stream removed, broadcasting: 5\n" May 7 00:37:06.124: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 7 00:37:06.124: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 7 00:37:36.152: INFO: Waiting for StatefulSet statefulset-1656/ss2 to complete update STEP: Rolling back to a previous revision May 7 00:37:46.160: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1656 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 7 00:37:46.443: INFO: stderr: "I0507 00:37:46.310934 2010 log.go:172] (0xc0009e9290) (0xc000850fa0) Create stream\nI0507 00:37:46.311018 2010 log.go:172] (0xc0009e9290) (0xc000850fa0) Stream added, broadcasting: 1\nI0507 00:37:46.314045 2010 log.go:172] (0xc0009e9290) Reply frame received for 1\nI0507 00:37:46.314095 2010 log.go:172] (0xc0009e9290) (0xc000851540) Create stream\nI0507 00:37:46.314107 2010 log.go:172] (0xc0009e9290) (0xc000851540) Stream added, broadcasting: 3\nI0507 00:37:46.315009 2010 log.go:172] (0xc0009e9290) Reply frame received for 3\nI0507 00:37:46.315049 2010 log.go:172] (0xc0009e9290) (0xc0004155e0) Create stream\nI0507 00:37:46.315069 2010 log.go:172] (0xc0009e9290) (0xc0004155e0) Stream added, broadcasting: 5\nI0507 00:37:46.315998 2010 log.go:172] (0xc0009e9290) Reply frame received for 5\nI0507 00:37:46.402182 2010 log.go:172] (0xc0009e9290) Data frame received for 5\nI0507 00:37:46.402204 2010 log.go:172] (0xc0004155e0) (5) Data frame handling\nI0507 00:37:46.402216 2010 
log.go:172] (0xc0004155e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0507 00:37:46.433471 2010 log.go:172] (0xc0009e9290) Data frame received for 3\nI0507 00:37:46.433512 2010 log.go:172] (0xc000851540) (3) Data frame handling\nI0507 00:37:46.433530 2010 log.go:172] (0xc000851540) (3) Data frame sent\nI0507 00:37:46.433653 2010 log.go:172] (0xc0009e9290) Data frame received for 3\nI0507 00:37:46.433685 2010 log.go:172] (0xc000851540) (3) Data frame handling\nI0507 00:37:46.433874 2010 log.go:172] (0xc0009e9290) Data frame received for 5\nI0507 00:37:46.433893 2010 log.go:172] (0xc0004155e0) (5) Data frame handling\nI0507 00:37:46.436089 2010 log.go:172] (0xc0009e9290) Data frame received for 1\nI0507 00:37:46.436228 2010 log.go:172] (0xc000850fa0) (1) Data frame handling\nI0507 00:37:46.436275 2010 log.go:172] (0xc000850fa0) (1) Data frame sent\nI0507 00:37:46.436328 2010 log.go:172] (0xc0009e9290) (0xc000850fa0) Stream removed, broadcasting: 1\nI0507 00:37:46.436375 2010 log.go:172] (0xc0009e9290) Go away received\nI0507 00:37:46.439556 2010 log.go:172] (0xc0009e9290) (0xc000850fa0) Stream removed, broadcasting: 1\nI0507 00:37:46.439588 2010 log.go:172] (0xc0009e9290) (0xc000851540) Stream removed, broadcasting: 3\nI0507 00:37:46.439615 2010 log.go:172] (0xc0009e9290) (0xc0004155e0) Stream removed, broadcasting: 5\n" May 7 00:37:46.443: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 7 00:37:46.443: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 7 00:37:56.477: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 7 00:38:06.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1656 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 7 00:38:06.869: 
INFO: stderr: "I0507 00:38:06.775115 2030 log.go:172] (0xc000b96370) (0xc000646a00) Create stream\nI0507 00:38:06.775299 2030 log.go:172] (0xc000b96370) (0xc000646a00) Stream added, broadcasting: 1\nI0507 00:38:06.779031 2030 log.go:172] (0xc000b96370) Reply frame received for 1\nI0507 00:38:06.779097 2030 log.go:172] (0xc000b96370) (0xc0006472c0) Create stream\nI0507 00:38:06.779121 2030 log.go:172] (0xc000b96370) (0xc0006472c0) Stream added, broadcasting: 3\nI0507 00:38:06.780224 2030 log.go:172] (0xc000b96370) Reply frame received for 3\nI0507 00:38:06.780260 2030 log.go:172] (0xc000b96370) (0xc0006477c0) Create stream\nI0507 00:38:06.780278 2030 log.go:172] (0xc000b96370) (0xc0006477c0) Stream added, broadcasting: 5\nI0507 00:38:06.781069 2030 log.go:172] (0xc000b96370) Reply frame received for 5\nI0507 00:38:06.862198 2030 log.go:172] (0xc000b96370) Data frame received for 3\nI0507 00:38:06.862232 2030 log.go:172] (0xc0006472c0) (3) Data frame handling\nI0507 00:38:06.862250 2030 log.go:172] (0xc0006472c0) (3) Data frame sent\nI0507 00:38:06.862260 2030 log.go:172] (0xc000b96370) Data frame received for 3\nI0507 00:38:06.862267 2030 log.go:172] (0xc0006472c0) (3) Data frame handling\nI0507 00:38:06.862296 2030 log.go:172] (0xc000b96370) Data frame received for 5\nI0507 00:38:06.862307 2030 log.go:172] (0xc0006477c0) (5) Data frame handling\nI0507 00:38:06.862328 2030 log.go:172] (0xc0006477c0) (5) Data frame sent\nI0507 00:38:06.862336 2030 log.go:172] (0xc000b96370) Data frame received for 5\nI0507 00:38:06.862342 2030 log.go:172] (0xc0006477c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0507 00:38:06.863671 2030 log.go:172] (0xc000b96370) Data frame received for 1\nI0507 00:38:06.863748 2030 log.go:172] (0xc000646a00) (1) Data frame handling\nI0507 00:38:06.863764 2030 log.go:172] (0xc000646a00) (1) Data frame sent\nI0507 00:38:06.863776 2030 log.go:172] (0xc000b96370) (0xc000646a00) Stream removed, broadcasting: 1\nI0507 
00:38:06.863789 2030 log.go:172] (0xc000b96370) Go away received\nI0507 00:38:06.864125 2030 log.go:172] (0xc000b96370) (0xc000646a00) Stream removed, broadcasting: 1\nI0507 00:38:06.864153 2030 log.go:172] (0xc000b96370) (0xc0006472c0) Stream removed, broadcasting: 3\nI0507 00:38:06.864163 2030 log.go:172] (0xc000b96370) (0xc0006477c0) Stream removed, broadcasting: 5\n" May 7 00:38:06.869: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 7 00:38:06.869: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 7 00:38:16.935: INFO: Waiting for StatefulSet statefulset-1656/ss2 to complete update May 7 00:38:16.935: INFO: Waiting for Pod statefulset-1656/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 7 00:38:16.935: INFO: Waiting for Pod statefulset-1656/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 7 00:38:26.971: INFO: Waiting for StatefulSet statefulset-1656/ss2 to complete update May 7 00:38:26.971: INFO: Waiting for Pod statefulset-1656/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 7 00:38:26.971: INFO: Waiting for Pod statefulset-1656/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 7 00:38:37.557: INFO: Waiting for StatefulSet statefulset-1656/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 7 00:38:46.943: INFO: Deleting all statefulset in ns statefulset-1656 May 7 00:38:46.946: INFO: Scaling statefulset ss2 to 0 May 7 00:39:06.966: INFO: Waiting for statefulset status.replicas updated to 0 May 7 00:39:06.969: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:39:06.983: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1656" for this suite. • [SLOW TEST:164.371 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":129,"skipped":1922,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:39:06.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 7 00:39:07.071: INFO: Waiting up to 5m0s for pod "pod-ab606ada-12fa-4e82-8442-a3ffb95db821" in namespace "emptydir-3096" to be "Succeeded or Failed" May 7 00:39:07.107: INFO: Pod 
"pod-ab606ada-12fa-4e82-8442-a3ffb95db821": Phase="Pending", Reason="", readiness=false. Elapsed: 35.804219ms May 7 00:39:09.112: INFO: Pod "pod-ab606ada-12fa-4e82-8442-a3ffb95db821": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040245863s May 7 00:39:11.116: INFO: Pod "pod-ab606ada-12fa-4e82-8442-a3ffb95db821": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044624242s STEP: Saw pod success May 7 00:39:11.116: INFO: Pod "pod-ab606ada-12fa-4e82-8442-a3ffb95db821" satisfied condition "Succeeded or Failed" May 7 00:39:11.120: INFO: Trying to get logs from node latest-worker2 pod pod-ab606ada-12fa-4e82-8442-a3ffb95db821 container test-container: STEP: delete the pod May 7 00:39:11.326: INFO: Waiting for pod pod-ab606ada-12fa-4e82-8442-a3ffb95db821 to disappear May 7 00:39:11.390: INFO: Pod pod-ab606ada-12fa-4e82-8442-a3ffb95db821 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:39:11.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3096" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":130,"skipped":1977,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:39:11.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-7794edea-80bf-43ca-8858-332882994a91 STEP: Creating a pod to test consume configMaps May 7 00:39:11.494: INFO: Waiting up to 5m0s for pod "pod-configmaps-4513cfe6-6eca-44c9-9c47-696caf8b1200" in namespace "configmap-9730" to be "Succeeded or Failed" May 7 00:39:11.509: INFO: Pod "pod-configmaps-4513cfe6-6eca-44c9-9c47-696caf8b1200": Phase="Pending", Reason="", readiness=false. Elapsed: 15.170974ms May 7 00:39:13.516: INFO: Pod "pod-configmaps-4513cfe6-6eca-44c9-9c47-696caf8b1200": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022586848s May 7 00:39:15.519: INFO: Pod "pod-configmaps-4513cfe6-6eca-44c9-9c47-696caf8b1200": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025287124s STEP: Saw pod success May 7 00:39:15.519: INFO: Pod "pod-configmaps-4513cfe6-6eca-44c9-9c47-696caf8b1200" satisfied condition "Succeeded or Failed" May 7 00:39:15.526: INFO: Trying to get logs from node latest-worker pod pod-configmaps-4513cfe6-6eca-44c9-9c47-696caf8b1200 container configmap-volume-test: STEP: delete the pod May 7 00:39:15.596: INFO: Waiting for pod pod-configmaps-4513cfe6-6eca-44c9-9c47-696caf8b1200 to disappear May 7 00:39:15.701: INFO: Pod pod-configmaps-4513cfe6-6eca-44c9-9c47-696caf8b1200 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:39:15.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9730" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":131,"skipped":2016,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:39:15.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:39:15.969: INFO: The status of Pod test-webserver-ff7dd579-2998-4225-a8fa-ee252024cd40 is Pending, waiting for it to be Running (with Ready = true) May 7 00:39:17.973: INFO: The status of Pod test-webserver-ff7dd579-2998-4225-a8fa-ee252024cd40 is Pending, waiting for it to be Running (with Ready = true) May 7 00:39:19.973: INFO: The status of Pod test-webserver-ff7dd579-2998-4225-a8fa-ee252024cd40 is Running (Ready = false) May 7 00:39:22.036: INFO: The status of Pod test-webserver-ff7dd579-2998-4225-a8fa-ee252024cd40 is Running (Ready = false) May 7 00:39:23.974: INFO: The status of Pod test-webserver-ff7dd579-2998-4225-a8fa-ee252024cd40 is Running (Ready = false) May 7 00:39:25.973: INFO: The status of Pod test-webserver-ff7dd579-2998-4225-a8fa-ee252024cd40 is Running (Ready = false) May 7 00:39:28.018: INFO: The status of Pod test-webserver-ff7dd579-2998-4225-a8fa-ee252024cd40 is Running (Ready = false) May 7 00:39:29.973: INFO: The status of Pod test-webserver-ff7dd579-2998-4225-a8fa-ee252024cd40 is Running (Ready = false) May 7 00:39:31.994: INFO: The status of Pod test-webserver-ff7dd579-2998-4225-a8fa-ee252024cd40 is Running (Ready = false) May 7 00:39:33.973: INFO: The status of Pod test-webserver-ff7dd579-2998-4225-a8fa-ee252024cd40 is Running (Ready = true) May 7 00:39:33.975: INFO: Container started at 2020-05-07 00:39:18 +0000 UTC, pod became ready at 2020-05-07 00:39:33 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:39:33.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9217" for this suite. 
• [SLOW TEST:18.308 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":132,"skipped":2056,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:39:34.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 7 00:39:34.158: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 7 00:39:34.175: INFO: Waiting for terminating namespaces to be deleted... 
May 7 00:39:34.177: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 7 00:39:34.181: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 7 00:39:34.181: INFO: Container kindnet-cni ready: true, restart count 0 May 7 00:39:34.181: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 7 00:39:34.181: INFO: Container kube-proxy ready: true, restart count 0 May 7 00:39:34.181: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 7 00:39:34.185: INFO: test-webserver-ff7dd579-2998-4225-a8fa-ee252024cd40 from container-probe-9217 started at 2020-05-07 00:39:16 +0000 UTC (1 container status recorded) May 7 00:39:34.185: INFO: Container test-webserver ready: true, restart count 0 May 7 00:39:34.185: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 7 00:39:34.185: INFO: Container kindnet-cni ready: true, restart count 0 May 7 00:39:34.185: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 7 00:39:34.185: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-cfbdcba6-efdc-4581-89e5-ac4d59fef132 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-cfbdcba6-efdc-4581-89e5-ac4d59fef132 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-cfbdcba6-efdc-4581-89e5-ac4d59fef132 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:44:44.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3631" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:310.391 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":133,"skipped":2112,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:44:44.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-be691ab1-1b8f-40a2-9a38-6081e06729e8 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:44:44.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3930" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":134,"skipped":2148,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:44:44.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 7 00:44:44.659: INFO: Waiting up to 5m0s for pod 
"pod-63a99d5c-d55d-4d41-abec-9f2347ec8911" in namespace "emptydir-9480" to be "Succeeded or Failed" May 7 00:44:44.704: INFO: Pod "pod-63a99d5c-d55d-4d41-abec-9f2347ec8911": Phase="Pending", Reason="", readiness=false. Elapsed: 45.327394ms May 7 00:44:46.708: INFO: Pod "pod-63a99d5c-d55d-4d41-abec-9f2347ec8911": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049458936s May 7 00:44:48.800: INFO: Pod "pod-63a99d5c-d55d-4d41-abec-9f2347ec8911": Phase="Running", Reason="", readiness=true. Elapsed: 4.140874181s May 7 00:44:50.803: INFO: Pod "pod-63a99d5c-d55d-4d41-abec-9f2347ec8911": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.143820997s STEP: Saw pod success May 7 00:44:50.803: INFO: Pod "pod-63a99d5c-d55d-4d41-abec-9f2347ec8911" satisfied condition "Succeeded or Failed" May 7 00:44:50.808: INFO: Trying to get logs from node latest-worker2 pod pod-63a99d5c-d55d-4d41-abec-9f2347ec8911 container test-container: STEP: delete the pod May 7 00:44:50.848: INFO: Waiting for pod pod-63a99d5c-d55d-4d41-abec-9f2347ec8911 to disappear May 7 00:44:50.863: INFO: Pod pod-63a99d5c-d55d-4d41-abec-9f2347ec8911 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:44:50.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9480" for this suite. 
• [SLOW TEST:6.327 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":135,"skipped":2166,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:44:50.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 7 00:44:50.993: INFO: Waiting up to 5m0s for pod "var-expansion-949510d2-ec63-4758-b516-16b2efedd4e4" in namespace "var-expansion-349" to be "Succeeded or Failed" May 7 00:44:51.075: INFO: Pod "var-expansion-949510d2-ec63-4758-b516-16b2efedd4e4": Phase="Pending", Reason="", readiness=false. Elapsed: 81.688078ms May 7 00:44:53.079: INFO: Pod "var-expansion-949510d2-ec63-4758-b516-16b2efedd4e4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.085567998s May 7 00:44:55.082: INFO: Pod "var-expansion-949510d2-ec63-4758-b516-16b2efedd4e4": Phase="Running", Reason="", readiness=true. Elapsed: 4.089147058s May 7 00:44:57.086: INFO: Pod "var-expansion-949510d2-ec63-4758-b516-16b2efedd4e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.093209427s STEP: Saw pod success May 7 00:44:57.086: INFO: Pod "var-expansion-949510d2-ec63-4758-b516-16b2efedd4e4" satisfied condition "Succeeded or Failed" May 7 00:44:57.089: INFO: Trying to get logs from node latest-worker2 pod var-expansion-949510d2-ec63-4758-b516-16b2efedd4e4 container dapi-container: STEP: delete the pod May 7 00:44:57.155: INFO: Waiting for pod var-expansion-949510d2-ec63-4758-b516-16b2efedd4e4 to disappear May 7 00:44:57.326: INFO: Pod var-expansion-949510d2-ec63-4758-b516-16b2efedd4e4 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:44:57.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-349" for this suite. 
• [SLOW TEST:6.466 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":136,"skipped":2170,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:44:57.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-2lzt STEP: Creating a pod to test atomic-volume-subpath May 7 00:44:57.681: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2lzt" in namespace "subpath-1846" to be "Succeeded or Failed" May 7 00:44:57.691: INFO: Pod "pod-subpath-test-secret-2lzt": Phase="Pending", Reason="", readiness=false. Elapsed: 9.177202ms May 7 00:44:59.695: INFO: Pod "pod-subpath-test-secret-2lzt": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013222291s May 7 00:45:01.699: INFO: Pod "pod-subpath-test-secret-2lzt": Phase="Running", Reason="", readiness=true. Elapsed: 4.017632472s May 7 00:45:03.703: INFO: Pod "pod-subpath-test-secret-2lzt": Phase="Running", Reason="", readiness=true. Elapsed: 6.02169026s May 7 00:45:05.873: INFO: Pod "pod-subpath-test-secret-2lzt": Phase="Running", Reason="", readiness=true. Elapsed: 8.191202568s May 7 00:45:07.876: INFO: Pod "pod-subpath-test-secret-2lzt": Phase="Running", Reason="", readiness=true. Elapsed: 10.194441515s May 7 00:45:09.880: INFO: Pod "pod-subpath-test-secret-2lzt": Phase="Running", Reason="", readiness=true. Elapsed: 12.198501497s May 7 00:45:11.888: INFO: Pod "pod-subpath-test-secret-2lzt": Phase="Running", Reason="", readiness=true. Elapsed: 14.207048207s May 7 00:45:13.891: INFO: Pod "pod-subpath-test-secret-2lzt": Phase="Running", Reason="", readiness=true. Elapsed: 16.210102735s May 7 00:45:15.895: INFO: Pod "pod-subpath-test-secret-2lzt": Phase="Running", Reason="", readiness=true. Elapsed: 18.213678558s May 7 00:45:17.900: INFO: Pod "pod-subpath-test-secret-2lzt": Phase="Running", Reason="", readiness=true. Elapsed: 20.218425479s May 7 00:45:19.904: INFO: Pod "pod-subpath-test-secret-2lzt": Phase="Running", Reason="", readiness=true. Elapsed: 22.222284303s May 7 00:45:21.907: INFO: Pod "pod-subpath-test-secret-2lzt": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.226036017s STEP: Saw pod success May 7 00:45:21.907: INFO: Pod "pod-subpath-test-secret-2lzt" satisfied condition "Succeeded or Failed" May 7 00:45:21.911: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-2lzt container test-container-subpath-secret-2lzt: STEP: delete the pod May 7 00:45:21.992: INFO: Waiting for pod pod-subpath-test-secret-2lzt to disappear May 7 00:45:22.001: INFO: Pod pod-subpath-test-secret-2lzt no longer exists STEP: Deleting pod pod-subpath-test-secret-2lzt May 7 00:45:22.001: INFO: Deleting pod "pod-subpath-test-secret-2lzt" in namespace "subpath-1846" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:45:22.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1846" for this suite. • [SLOW TEST:24.645 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":137,"skipped":2190,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:45:22.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2882.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2882.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2882.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2882.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2882.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2882.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2882.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2882.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2882.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2882.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 00:45:28.238: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:28.241: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:28.244: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:28.247: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:28.258: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:28.261: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod 
dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:28.265: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:28.268: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:28.274: INFO: Lookups using dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2882.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2882.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local jessie_udp@dns-test-service-2.dns-2882.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2882.svc.cluster.local] May 7 00:45:34.184: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:34.279: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:34.282: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2882.svc.cluster.local from pod 
dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:34.320: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:34.348: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:34.350: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:34.353: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:34.355: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:34.362: INFO: Lookups using dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2882.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2882.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local jessie_udp@dns-test-service-2.dns-2882.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2882.svc.cluster.local] May 7 00:45:38.279: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:38.282: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:38.285: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:38.315: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:38.325: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:38.328: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:38.331: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2882.svc.cluster.local from pod 
dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:38.333: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:38.339: INFO: Lookups using dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2882.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2882.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local jessie_udp@dns-test-service-2.dns-2882.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2882.svc.cluster.local] May 7 00:45:43.279: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:43.283: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:43.286: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:43.290: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2882.svc.cluster.local from pod 
dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:43.300: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:43.304: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:43.306: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:43.309: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:43.316: INFO: Lookups using dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2882.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2882.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local jessie_udp@dns-test-service-2.dns-2882.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2882.svc.cluster.local] May 7 00:45:48.279: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local 
from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:48.283: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:48.308: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:48.312: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:48.322: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:48.325: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:48.328: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:48.331: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the 
server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:48.337: INFO: Lookups using dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2882.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2882.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local jessie_udp@dns-test-service-2.dns-2882.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2882.svc.cluster.local] May 7 00:45:53.280: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:53.284: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:53.287: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:53.289: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:53.296: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod 
dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:53.298: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:53.300: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:53.302: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2882.svc.cluster.local from pod dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496: the server could not find the requested resource (get pods dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496) May 7 00:45:53.306: INFO: Lookups using dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2882.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2882.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2882.svc.cluster.local jessie_udp@dns-test-service-2.dns-2882.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2882.svc.cluster.local] May 7 00:45:58.319: INFO: DNS probes using dns-2882/dns-test-3d9990c9-9d09-4894-b705-1f7f0c9e8496 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:45:59.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "dns-2882" for this suite. • [SLOW TEST:37.755 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":138,"skipped":2202,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:45:59.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 7 00:46:00.410: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 7 00:46:00.761: INFO: Waiting for terminating namespaces to be deleted... 
May 7 00:46:00.763: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 7 00:46:00.767: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 7 00:46:00.767: INFO: Container kindnet-cni ready: true, restart count 0 May 7 00:46:00.767: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 7 00:46:00.767: INFO: Container kube-proxy ready: true, restart count 0 May 7 00:46:00.767: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 7 00:46:00.771: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 7 00:46:00.771: INFO: Container kindnet-cni ready: true, restart count 0 May 7 00:46:00.771: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 7 00:46:00.771: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 May 7 00:46:01.399: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker May 7 00:46:01.399: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 May 7 00:46:01.399: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker May 7 00:46:01.399: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. 
May 7 00:46:01.399: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker May 7 00:46:01.407: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-823f6ad6-1934-45a0-9d14-ca1fd970573a.160c986ccc5dc17d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-96/filler-pod-823f6ad6-1934-45a0-9d14-ca1fd970573a to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-823f6ad6-1934-45a0-9d14-ca1fd970573a.160c986d5c8f3ed7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-823f6ad6-1934-45a0-9d14-ca1fd970573a.160c986e7a26f649], Reason = [Created], Message = [Created container filler-pod-823f6ad6-1934-45a0-9d14-ca1fd970573a] STEP: Considering event: Type = [Normal], Name = [filler-pod-823f6ad6-1934-45a0-9d14-ca1fd970573a.160c986eaa2fd911], Reason = [Started], Message = [Started container filler-pod-823f6ad6-1934-45a0-9d14-ca1fd970573a] STEP: Considering event: Type = [Normal], Name = [filler-pod-e02d68ff-ad53-4997-b1cf-966411170b7b.160c986ccb71d05e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-96/filler-pod-e02d68ff-ad53-4997-b1cf-966411170b7b to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-e02d68ff-ad53-4997-b1cf-966411170b7b.160c986da2a08e9e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e02d68ff-ad53-4997-b1cf-966411170b7b.160c986e7a22d5e4], Reason = [Created], Message = [Created container filler-pod-e02d68ff-ad53-4997-b1cf-966411170b7b] STEP: Considering event: Type = [Normal], Name = [filler-pod-e02d68ff-ad53-4997-b1cf-966411170b7b.160c986eb9087e16], Reason = [Started], Message = [Started container 
filler-pod-e02d68ff-ad53-4997-b1cf-966411170b7b] STEP: Considering event: Type = [Warning], Name = [additional-pod.160c986f4743550c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.160c986f48650a33], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:46:14.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-96" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:14.374 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":139,"skipped":2234,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:46:14.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 7 00:46:14.951: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61319348-0cdb-4052-8d29-ee6b6b50ff4d" in namespace 
"projected-2690" to be "Succeeded or Failed" May 7 00:46:15.021: INFO: Pod "downwardapi-volume-61319348-0cdb-4052-8d29-ee6b6b50ff4d": Phase="Pending", Reason="", readiness=false. Elapsed: 70.067405ms May 7 00:46:17.321: INFO: Pod "downwardapi-volume-61319348-0cdb-4052-8d29-ee6b6b50ff4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.369880782s May 7 00:46:20.065: INFO: Pod "downwardapi-volume-61319348-0cdb-4052-8d29-ee6b6b50ff4d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.113771513s May 7 00:46:22.093: INFO: Pod "downwardapi-volume-61319348-0cdb-4052-8d29-ee6b6b50ff4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.141942022s STEP: Saw pod success May 7 00:46:22.093: INFO: Pod "downwardapi-volume-61319348-0cdb-4052-8d29-ee6b6b50ff4d" satisfied condition "Succeeded or Failed" May 7 00:46:22.148: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-61319348-0cdb-4052-8d29-ee6b6b50ff4d container client-container: STEP: delete the pod May 7 00:46:23.220: INFO: Waiting for pod downwardapi-volume-61319348-0cdb-4052-8d29-ee6b6b50ff4d to disappear May 7 00:46:23.507: INFO: Pod downwardapi-volume-61319348-0cdb-4052-8d29-ee6b6b50ff4d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:46:23.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2690" for this suite. 
• [SLOW TEST:9.595 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":140,"skipped":2289,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:46:23.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:47:24.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8298" for this suite. 
• [SLOW TEST:61.018 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":141,"skipped":2315,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:47:24.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:47:37.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5561" for this suite. • [SLOW TEST:12.822 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":288,"completed":142,"skipped":2326,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:47:37.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-251d325e-8590-42b9-b629-23e3ceba6a51 STEP: Creating a pod to test consume secrets May 7 00:47:37.916: INFO: Waiting up to 5m0s for pod "pod-secrets-bb5741ba-ad48-4943-b6d1-865310bbb48a" in namespace "secrets-3027" to be "Succeeded or Failed" May 7 00:47:38.407: INFO: Pod "pod-secrets-bb5741ba-ad48-4943-b6d1-865310bbb48a": Phase="Pending", Reason="", readiness=false. Elapsed: 490.422772ms May 7 00:47:40.411: INFO: Pod "pod-secrets-bb5741ba-ad48-4943-b6d1-865310bbb48a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.495227123s May 7 00:47:42.416: INFO: Pod "pod-secrets-bb5741ba-ad48-4943-b6d1-865310bbb48a": Phase="Running", Reason="", readiness=true. Elapsed: 4.500014749s May 7 00:47:44.420: INFO: Pod "pod-secrets-bb5741ba-ad48-4943-b6d1-865310bbb48a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.504146158s STEP: Saw pod success May 7 00:47:44.420: INFO: Pod "pod-secrets-bb5741ba-ad48-4943-b6d1-865310bbb48a" satisfied condition "Succeeded or Failed" May 7 00:47:44.423: INFO: Trying to get logs from node latest-worker pod pod-secrets-bb5741ba-ad48-4943-b6d1-865310bbb48a container secret-volume-test: STEP: delete the pod May 7 00:47:44.473: INFO: Waiting for pod pod-secrets-bb5741ba-ad48-4943-b6d1-865310bbb48a to disappear May 7 00:47:44.484: INFO: Pod pod-secrets-bb5741ba-ad48-4943-b6d1-865310bbb48a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:47:44.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3027" for this suite. • [SLOW TEST:6.910 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":143,"skipped":2348,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:47:44.493: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 7 00:47:45.060: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 7 00:47:47.178: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409265, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409265, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409265, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409265, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:47:49.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409265, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724409265, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409265, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409265, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 00:47:52.264: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:48:02.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5900" for this suite. STEP: Destroying namespace "webhook-5900-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.029 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":144,"skipped":2394,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:48:02.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:48:08.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1252" for this suite. 
• [SLOW TEST:6.258 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":145,"skipped":2398,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:48:08.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-07bc3ad9-7b87-4812-86c0-243ffb428be3 STEP: Creating a pod to test consume configMaps May 7 00:48:09.116: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-910bcd6c-86d9-4246-b833-32d8dac98959" in namespace "projected-8872" to be "Succeeded or Failed" May 7 00:48:09.126: INFO: Pod "pod-projected-configmaps-910bcd6c-86d9-4246-b833-32d8dac98959": Phase="Pending", Reason="", readiness=false. Elapsed: 9.627943ms May 7 00:48:11.130: INFO: Pod "pod-projected-configmaps-910bcd6c-86d9-4246-b833-32d8dac98959": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014037805s May 7 00:48:13.184: INFO: Pod "pod-projected-configmaps-910bcd6c-86d9-4246-b833-32d8dac98959": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067377246s STEP: Saw pod success May 7 00:48:13.184: INFO: Pod "pod-projected-configmaps-910bcd6c-86d9-4246-b833-32d8dac98959" satisfied condition "Succeeded or Failed" May 7 00:48:13.187: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-910bcd6c-86d9-4246-b833-32d8dac98959 container projected-configmap-volume-test: STEP: delete the pod May 7 00:48:13.373: INFO: Waiting for pod pod-projected-configmaps-910bcd6c-86d9-4246-b833-32d8dac98959 to disappear May 7 00:48:13.402: INFO: Pod pod-projected-configmaps-910bcd6c-86d9-4246-b833-32d8dac98959 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:48:13.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8872" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":146,"skipped":2420,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:48:13.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-2c318c95-5065-44ba-a101-ec1fa6f642e8 STEP: Creating a pod to test consume secrets May 7 00:48:13.617: INFO: Waiting up to 5m0s for pod "pod-secrets-568da87c-b7ef-43da-a20f-3234794ce702" in namespace "secrets-1049" to be "Succeeded or Failed" May 7 00:48:13.647: INFO: Pod "pod-secrets-568da87c-b7ef-43da-a20f-3234794ce702": Phase="Pending", Reason="", readiness=false. Elapsed: 30.125223ms May 7 00:48:15.651: INFO: Pod "pod-secrets-568da87c-b7ef-43da-a20f-3234794ce702": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034407562s May 7 00:48:17.656: INFO: Pod "pod-secrets-568da87c-b7ef-43da-a20f-3234794ce702": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038515742s STEP: Saw pod success May 7 00:48:17.656: INFO: Pod "pod-secrets-568da87c-b7ef-43da-a20f-3234794ce702" satisfied condition "Succeeded or Failed" May 7 00:48:17.658: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-568da87c-b7ef-43da-a20f-3234794ce702 container secret-volume-test: STEP: delete the pod May 7 00:48:17.706: INFO: Waiting for pod pod-secrets-568da87c-b7ef-43da-a20f-3234794ce702 to disappear May 7 00:48:17.724: INFO: Pod pod-secrets-568da87c-b7ef-43da-a20f-3234794ce702 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:48:17.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1049" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":147,"skipped":2466,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:48:17.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 7 00:48:17.883: INFO: PodSpec: initContainers 
in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:48:25.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8110" for this suite. • [SLOW TEST:8.230 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":148,"skipped":2467,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:48:25.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 7 00:48:26.151: INFO: Waiting up 
to 5m0s for pod "downwardapi-volume-09d71427-677c-4cf8-adb6-392eb58ff71c" in namespace "downward-api-706" to be "Succeeded or Failed" May 7 00:48:26.187: INFO: Pod "downwardapi-volume-09d71427-677c-4cf8-adb6-392eb58ff71c": Phase="Pending", Reason="", readiness=false. Elapsed: 35.866348ms May 7 00:48:28.191: INFO: Pod "downwardapi-volume-09d71427-677c-4cf8-adb6-392eb58ff71c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039885004s May 7 00:48:30.195: INFO: Pod "downwardapi-volume-09d71427-677c-4cf8-adb6-392eb58ff71c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043715429s STEP: Saw pod success May 7 00:48:30.195: INFO: Pod "downwardapi-volume-09d71427-677c-4cf8-adb6-392eb58ff71c" satisfied condition "Succeeded or Failed" May 7 00:48:30.197: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-09d71427-677c-4cf8-adb6-392eb58ff71c container client-container: STEP: delete the pod May 7 00:48:30.419: INFO: Waiting for pod downwardapi-volume-09d71427-677c-4cf8-adb6-392eb58ff71c to disappear May 7 00:48:30.499: INFO: Pod downwardapi-volume-09d71427-677c-4cf8-adb6-392eb58ff71c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:48:30.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-706" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":149,"skipped":2490,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:48:30.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:48:30.602: INFO: Creating deployment "test-recreate-deployment" May 7 00:48:30.622: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 7 00:48:30.642: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 7 00:48:32.650: INFO: Waiting deployment "test-recreate-deployment" to complete May 7 00:48:32.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409310, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409310, loc:(*time.Location)(0x7c342a0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409310, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409310, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:48:34.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409310, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409310, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409310, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409310, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:48:36.657: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 7 00:48:36.666: INFO: Updating deployment test-recreate-deployment May 7 00:48:36.666: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 7 00:48:37.229: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment 
deployment-497 /apis/apps/v1/namespaces/deployment-497/deployments/test-recreate-deployment 6f8ea8c3-6f48-4e0f-a697-c815b1093b52 2175126 2 2020-05-07 00:48:30 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-07 00:48:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-07 00:48:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004c61de8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-07 00:48:36 +0000 UTC,LastTransitionTime:2020-05-07 00:48:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-07 00:48:36 +0000 UTC,LastTransitionTime:2020-05-07 00:48:30 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 7 00:48:37.232: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-497 /apis/apps/v1/namespaces/deployment-497/replicasets/test-recreate-deployment-d5667d9c7 85bf9d17-c5d5-487b-a781-419b9558fb6f 2175125 1 2020-05-07 00:48:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 6f8ea8c3-6f48-4e0f-a697-c815b1093b52 0xc003064320 0xc003064321}] [] 
[{kube-controller-manager Update apps/v1 2020-05-07 00:48:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f8ea8c3-6f48-4e0f-a697-c815b1093b52\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003064398 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 7 00:48:37.232: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 7 00:48:37.232: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-497 /apis/apps/v1/namespaces/deployment-497/replicasets/test-recreate-deployment-6d65b9f6d8 1d8c301e-8710-43d7-bd13-d437800e24e3 2175115 2 2020-05-07 00:48:30 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 6f8ea8c3-6f48-4e0f-a697-c815b1093b52 0xc003064207 0xc003064208}] [] [{kube-controller-manager Update apps/v1 2020-05-07 00:48:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f8ea8c3-6f48-4e0f-a697-c815b1093b52\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030642b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 7 00:48:37.275: INFO: Pod "test-recreate-deployment-d5667d9c7-brwpc" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-brwpc test-recreate-deployment-d5667d9c7- deployment-497 /api/v1/namespaces/deployment-497/pods/test-recreate-deployment-d5667d9c7-brwpc 809c8f25-c58b-4139-bbe6-7ccdbce51c09 2175129 0 2020-05-07 00:48:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 85bf9d17-c5d5-487b-a781-419b9558fb6f 0xc003064860 0xc003064861}] [] [{kube-controller-manager Update v1 2020-05-07 00:48:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85bf9d17-c5d5-487b-a781-419b9558fb6f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-07 00:48:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jm6cz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jm6cz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jm6cz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:48:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:48:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:48:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-07 00:48:36 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-07 00:48:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:48:37.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-497" for this suite. • [SLOW TEST:6.782 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":150,"skipped":2510,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:48:37.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account 
to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-faa39dc9-b323-4847-9611-d8aaa62ce3e7 STEP: Creating a pod to test consume secrets May 7 00:48:37.381: INFO: Waiting up to 5m0s for pod "pod-secrets-e753a993-dea0-49ab-a95b-b2f9cdb81660" in namespace "secrets-8177" to be "Succeeded or Failed" May 7 00:48:37.415: INFO: Pod "pod-secrets-e753a993-dea0-49ab-a95b-b2f9cdb81660": Phase="Pending", Reason="", readiness=false. Elapsed: 33.420707ms May 7 00:48:39.447: INFO: Pod "pod-secrets-e753a993-dea0-49ab-a95b-b2f9cdb81660": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065972445s May 7 00:48:41.452: INFO: Pod "pod-secrets-e753a993-dea0-49ab-a95b-b2f9cdb81660": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070628873s STEP: Saw pod success May 7 00:48:41.452: INFO: Pod "pod-secrets-e753a993-dea0-49ab-a95b-b2f9cdb81660" satisfied condition "Succeeded or Failed" May 7 00:48:41.455: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-e753a993-dea0-49ab-a95b-b2f9cdb81660 container secret-volume-test: STEP: delete the pod May 7 00:48:41.488: INFO: Waiting for pod pod-secrets-e753a993-dea0-49ab-a95b-b2f9cdb81660 to disappear May 7 00:48:41.498: INFO: Pod pod-secrets-e753a993-dea0-49ab-a95b-b2f9cdb81660 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:48:41.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8177" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":151,"skipped":2528,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:48:41.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 7 00:48:46.183: INFO: Successfully updated pod "labelsupdatefa75bbd8-edc2-427d-baf2-4bc1f5e30a70" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:48:50.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8208" for this suite. 
• [SLOW TEST:8.752 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":152,"skipped":2580,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:48:50.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:48:50.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6086" for this suite. STEP: Destroying namespace "nspatchtest-fda82efa-7977-4e2a-a01b-f411cc3dd97c-5084" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":153,"skipped":2600,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:48:50.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 7 00:48:50.484: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' May 7 00:48:50.700: INFO: stderr: "" May 7 00:48:50.700: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:48:50.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2472" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":154,"skipped":2601,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:48:50.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-f6c2a4ae-5f5e-4ac8-8aa6-4a3014258657 STEP: Creating secret with name s-test-opt-upd-b49da78c-55f7-41d8-884b-df93fbbf9392 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f6c2a4ae-5f5e-4ac8-8aa6-4a3014258657 STEP: Updating secret s-test-opt-upd-b49da78c-55f7-41d8-884b-df93fbbf9392 STEP: Creating secret with name s-test-opt-create-b1e7aa00-b677-49e1-b234-cc263667b694 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:50:12.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9804" for this suite. 
• [SLOW TEST:81.415 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":155,"skipped":2662,"failed":0} [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:50:12.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:50:12.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-7248" for this suite. 
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":156,"skipped":2662,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:50:12.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-39eeb407-53b1-4857-b0ce-c1f4fb2e9a07 in namespace container-probe-7161 May 7 00:50:16.500: INFO: Started pod liveness-39eeb407-53b1-4857-b0ce-c1f4fb2e9a07 in namespace container-probe-7161 STEP: checking the pod's current state and verifying that restartCount is present May 7 00:50:16.503: INFO: Initial restart count of pod liveness-39eeb407-53b1-4857-b0ce-c1f4fb2e9a07 is 0 May 7 00:50:32.554: INFO: Restart count of pod container-probe-7161/liveness-39eeb407-53b1-4857-b0ce-c1f4fb2e9a07 is now 1 (16.051471494s elapsed) May 7 00:50:52.593: INFO: Restart count of pod container-probe-7161/liveness-39eeb407-53b1-4857-b0ce-c1f4fb2e9a07 is now 2 (36.09034297s elapsed) May 7 00:51:13.305: INFO: Restart count of pod container-probe-7161/liveness-39eeb407-53b1-4857-b0ce-c1f4fb2e9a07 is now 3 (56.801637556s elapsed) May 7 00:51:31.340: INFO: Restart count of pod 
container-probe-7161/liveness-39eeb407-53b1-4857-b0ce-c1f4fb2e9a07 is now 4 (1m14.836827244s elapsed) May 7 00:52:35.716: INFO: Restart count of pod container-probe-7161/liveness-39eeb407-53b1-4857-b0ce-c1f4fb2e9a07 is now 5 (2m19.213478679s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:52:35.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7161" for this suite. • [SLOW TEST:143.409 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":157,"skipped":2674,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:52:35.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
starting the proxy server May 7 00:52:35.880: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:52:35.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2264" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":158,"skipped":2696,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:52:36.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io 
discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:52:36.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9302" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":159,"skipped":2718,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:52:36.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5137 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-5137 I0507 00:52:36.791372 7 
runners.go:190] Created replication controller with name: externalname-service, namespace: services-5137, replica count: 2 I0507 00:52:39.841790 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:52:42.842042 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 7 00:52:42.842: INFO: Creating new exec pod May 7 00:52:47.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5137 execpodlt5m8 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 7 00:52:53.023: INFO: stderr: "I0507 00:52:52.930759 2088 log.go:172] (0xc0000e6370) (0xc00036a820) Create stream\nI0507 00:52:52.930802 2088 log.go:172] (0xc0000e6370) (0xc00036a820) Stream added, broadcasting: 1\nI0507 00:52:52.933995 2088 log.go:172] (0xc0000e6370) Reply frame received for 1\nI0507 00:52:52.934048 2088 log.go:172] (0xc0000e6370) (0xc00041d0e0) Create stream\nI0507 00:52:52.934062 2088 log.go:172] (0xc0000e6370) (0xc00041d0e0) Stream added, broadcasting: 3\nI0507 00:52:52.935090 2088 log.go:172] (0xc0000e6370) Reply frame received for 3\nI0507 00:52:52.935151 2088 log.go:172] (0xc0000e6370) (0xc0003fc280) Create stream\nI0507 00:52:52.935173 2088 log.go:172] (0xc0000e6370) (0xc0003fc280) Stream added, broadcasting: 5\nI0507 00:52:52.936160 2088 log.go:172] (0xc0000e6370) Reply frame received for 5\nI0507 00:52:53.014391 2088 log.go:172] (0xc0000e6370) Data frame received for 5\nI0507 00:52:53.014416 2088 log.go:172] (0xc0003fc280) (5) Data frame handling\nI0507 00:52:53.014430 2088 log.go:172] (0xc0003fc280) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0507 00:52:53.015082 2088 log.go:172] (0xc0000e6370) Data frame received for 5\nI0507 00:52:53.015118 2088 log.go:172] 
(0xc0003fc280) (5) Data frame handling\nI0507 00:52:53.015154 2088 log.go:172] (0xc0003fc280) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0507 00:52:53.015305 2088 log.go:172] (0xc0000e6370) Data frame received for 3\nI0507 00:52:53.015340 2088 log.go:172] (0xc00041d0e0) (3) Data frame handling\nI0507 00:52:53.015383 2088 log.go:172] (0xc0000e6370) Data frame received for 5\nI0507 00:52:53.015404 2088 log.go:172] (0xc0003fc280) (5) Data frame handling\nI0507 00:52:53.017430 2088 log.go:172] (0xc0000e6370) Data frame received for 1\nI0507 00:52:53.017456 2088 log.go:172] (0xc00036a820) (1) Data frame handling\nI0507 00:52:53.017485 2088 log.go:172] (0xc00036a820) (1) Data frame sent\nI0507 00:52:53.017513 2088 log.go:172] (0xc0000e6370) (0xc00036a820) Stream removed, broadcasting: 1\nI0507 00:52:53.017605 2088 log.go:172] (0xc0000e6370) Go away received\nI0507 00:52:53.018038 2088 log.go:172] (0xc0000e6370) (0xc00036a820) Stream removed, broadcasting: 1\nI0507 00:52:53.018063 2088 log.go:172] (0xc0000e6370) (0xc00041d0e0) Stream removed, broadcasting: 3\nI0507 00:52:53.018077 2088 log.go:172] (0xc0000e6370) (0xc0003fc280) Stream removed, broadcasting: 5\n" May 7 00:52:53.023: INFO: stdout: "" May 7 00:52:53.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5137 execpodlt5m8 -- /bin/sh -x -c nc -zv -t -w 2 10.107.253.39 80' May 7 00:52:53.272: INFO: stderr: "I0507 00:52:53.163795 2123 log.go:172] (0xc000bf7080) (0xc00083cf00) Create stream\nI0507 00:52:53.163894 2123 log.go:172] (0xc000bf7080) (0xc00083cf00) Stream added, broadcasting: 1\nI0507 00:52:53.168918 2123 log.go:172] (0xc000bf7080) Reply frame received for 1\nI0507 00:52:53.168974 2123 log.go:172] (0xc000bf7080) (0xc000837540) Create stream\nI0507 00:52:53.168989 2123 log.go:172] (0xc000bf7080) (0xc000837540) Stream added, broadcasting: 3\nI0507 00:52:53.170006 2123 log.go:172] 
(0xc000bf7080) Reply frame received for 3\nI0507 00:52:53.170044 2123 log.go:172] (0xc000bf7080) (0xc000542dc0) Create stream\nI0507 00:52:53.170056 2123 log.go:172] (0xc000bf7080) (0xc000542dc0) Stream added, broadcasting: 5\nI0507 00:52:53.170829 2123 log.go:172] (0xc000bf7080) Reply frame received for 5\nI0507 00:52:53.265667 2123 log.go:172] (0xc000bf7080) Data frame received for 3\nI0507 00:52:53.265699 2123 log.go:172] (0xc000837540) (3) Data frame handling\nI0507 00:52:53.265830 2123 log.go:172] (0xc000bf7080) Data frame received for 5\nI0507 00:52:53.265848 2123 log.go:172] (0xc000542dc0) (5) Data frame handling\nI0507 00:52:53.265874 2123 log.go:172] (0xc000542dc0) (5) Data frame sent\nI0507 00:52:53.265887 2123 log.go:172] (0xc000bf7080) Data frame received for 5\nI0507 00:52:53.265893 2123 log.go:172] (0xc000542dc0) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.253.39 80\nConnection to 10.107.253.39 80 port [tcp/http] succeeded!\nI0507 00:52:53.267168 2123 log.go:172] (0xc000bf7080) Data frame received for 1\nI0507 00:52:53.267183 2123 log.go:172] (0xc00083cf00) (1) Data frame handling\nI0507 00:52:53.267200 2123 log.go:172] (0xc00083cf00) (1) Data frame sent\nI0507 00:52:53.267287 2123 log.go:172] (0xc000bf7080) (0xc00083cf00) Stream removed, broadcasting: 1\nI0507 00:52:53.267333 2123 log.go:172] (0xc000bf7080) Go away received\nI0507 00:52:53.267619 2123 log.go:172] (0xc000bf7080) (0xc00083cf00) Stream removed, broadcasting: 1\nI0507 00:52:53.267641 2123 log.go:172] (0xc000bf7080) (0xc000837540) Stream removed, broadcasting: 3\nI0507 00:52:53.267653 2123 log.go:172] (0xc000bf7080) (0xc000542dc0) Stream removed, broadcasting: 5\n" May 7 00:52:53.272: INFO: stdout: "" May 7 00:52:53.272: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:52:53.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "services-5137" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:17.026 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":160,"skipped":2758,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:52:53.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 7 00:52:53.421: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:52:53.479: INFO: Number of nodes with available pods: 0
May 7 00:52:53.479: INFO: Node latest-worker is running more than one daemon pod
May 7 00:52:54.484: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:52:54.488: INFO: Number of nodes with available pods: 0
May 7 00:52:54.488: INFO: Node latest-worker is running more than one daemon pod
May 7 00:52:55.894: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:52:56.192: INFO: Number of nodes with available pods: 0
May 7 00:52:56.193: INFO: Node latest-worker is running more than one daemon pod
May 7 00:52:56.556: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:52:56.605: INFO: Number of nodes with available pods: 0
May 7 00:52:56.605: INFO: Node latest-worker is running more than one daemon pod
May 7 00:52:57.484: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:52:57.486: INFO: Number of nodes with available pods: 1
May 7 00:52:57.486: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:52:58.493: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:52:58.503: INFO: Number of nodes with available pods: 2
May 7 00:52:58.503: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 7 00:52:58.594: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:52:58.605: INFO: Number of nodes with available pods: 1
May 7 00:52:58.605: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:52:59.610: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:52:59.614: INFO: Number of nodes with available pods: 1
May 7 00:52:59.614: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:53:00.611: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:53:00.615: INFO: Number of nodes with available pods: 1
May 7 00:53:00.615: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:53:01.611: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:53:01.614: INFO: Number of nodes with available pods: 1
May 7 00:53:01.614: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:53:02.611: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:53:02.615: INFO: Number of nodes with available pods: 1
May 7 00:53:02.615: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:53:03.610: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:53:03.614: INFO: Number of nodes with available pods: 1
May 7 00:53:03.615: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:53:04.611: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:53:04.614: INFO: Number of nodes with available pods: 1
May 7 00:53:04.614: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:53:05.611: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:53:05.615: INFO: Number of nodes with available pods: 1
May 7 00:53:05.615: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:53:06.615: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 00:53:06.619: INFO: Number of nodes with available pods: 2
May 7 00:53:06.619: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9549, will wait for the garbage collector to delete the pods
May 7 00:53:06.682: INFO: Deleting DaemonSet.extensions daemon-set took: 7.39577ms
May 7 00:53:06.783: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.279987ms
May 7 00:53:15.287: INFO: Number of nodes with available pods: 0
May 7 00:53:15.287: INFO: Number of running nodes: 0, number of available pods: 0
May 7 00:53:15.317: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9549/daemonsets","resourceVersion":"2176293"},"items":null} May 7 00:53:15.320: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9549/pods","resourceVersion":"2176293"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:53:15.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9549" for this suite. • [SLOW TEST:22.024 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":161,"skipped":2809,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:53:15.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 7 00:53:15.483: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:53:22.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4557" for this suite. • [SLOW TEST:7.213 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":162,"skipped":2814,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:53:22.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created 
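The adoption test above hinges on label-selector matching: an orphan pod whose labels satisfy the ReplicationController's selector is adopted by that controller. A minimal sketch of the matching rule, assuming equality-based selectors only (names and labels here are illustrative, taken from the test's `name` label convention):

```python
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """A ReplicationController selector matches a pod when every
    selector key/value pair is present in the pod's labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

# An orphan pod carrying the 'name' label the test creates:
orphan_pod_labels = {"name": "pod-adoption"}

# An RC whose selector targets that label will adopt the pod...
rc_selector = {"name": "pod-adoption"}
print(selector_matches(rc_selector, orphan_pod_labels))   # True

# ...while a non-matching selector leaves it orphaned.
print(selector_matches({"name": "other"}, orphan_pod_labels))  # False
```

Real adoption additionally sets an `ownerReference` on the pod; this sketch only shows the selector check that gates it.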
STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:53:27.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3778" for this suite. • [SLOW TEST:5.170 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":163,"skipped":2829,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:53:27.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 7 00:53:27.844: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-9803179a-b93b-4725-a7c2-5cac432ff7b1" in namespace "projected-6761" to be "Succeeded or Failed" May 7 00:53:27.879: INFO: Pod "downwardapi-volume-9803179a-b93b-4725-a7c2-5cac432ff7b1": Phase="Pending", Reason="", readiness=false. Elapsed: 34.703516ms May 7 00:53:29.883: INFO: Pod "downwardapi-volume-9803179a-b93b-4725-a7c2-5cac432ff7b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039370622s May 7 00:53:31.888: INFO: Pod "downwardapi-volume-9803179a-b93b-4725-a7c2-5cac432ff7b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043625777s STEP: Saw pod success May 7 00:53:31.888: INFO: Pod "downwardapi-volume-9803179a-b93b-4725-a7c2-5cac432ff7b1" satisfied condition "Succeeded or Failed" May 7 00:53:31.891: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-9803179a-b93b-4725-a7c2-5cac432ff7b1 container client-container: STEP: delete the pod May 7 00:53:31.941: INFO: Waiting for pod downwardapi-volume-9803179a-b93b-4725-a7c2-5cac432ff7b1 to disappear May 7 00:53:31.953: INFO: Pod downwardapi-volume-9803179a-b93b-4725-a7c2-5cac432ff7b1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:53:31.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6761" for this suite. 
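The downward API test above mounts a projected volume that exposes the container's CPU limit to the container via a `resourceFieldRef`. A hedged sketch of the kind of pod manifest it exercises, built as a plain dict to stay self-contained (the pod name, image, and limit value are illustrative, not taken from the log):

```python
# Illustrative pod manifest: a projected downwardAPI volume publishing
# the container's limits.cpu at /etc/podinfo/cpu_limit.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},  # illustrative name
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox",  # assumed image
            "command": ["sh", "-c", "cat /etc/podinfo/cpu_limit"],
            "resources": {"limits": {"cpu": "1250m"}},  # illustrative limit
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "projected": {"sources": [{
                "downwardAPI": {"items": [{
                    "path": "cpu_limit",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.cpu",
                    },
                }]},
            }]},
        }],
        "restartPolicy": "Never",
    },
}
```

With `restartPolicy: Never` the pod runs once and lands in `Succeeded`, which is exactly the "Succeeded or Failed" condition the framework polls for above.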
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":164,"skipped":2833,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:53:31.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-8eff21ae-90e2-44bc-a30a-add0d19af556 STEP: Creating a pod to test consume configMaps May 7 00:53:32.038: INFO: Waiting up to 5m0s for pod "pod-configmaps-eb8b608d-42d0-46a2-b141-586fc336b4f3" in namespace "configmap-6442" to be "Succeeded or Failed" May 7 00:53:32.073: INFO: Pod "pod-configmaps-eb8b608d-42d0-46a2-b141-586fc336b4f3": Phase="Pending", Reason="", readiness=false. Elapsed: 34.496799ms May 7 00:53:34.077: INFO: Pod "pod-configmaps-eb8b608d-42d0-46a2-b141-586fc336b4f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038698832s May 7 00:53:36.081: INFO: Pod "pod-configmaps-eb8b608d-42d0-46a2-b141-586fc336b4f3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042985843s STEP: Saw pod success May 7 00:53:36.081: INFO: Pod "pod-configmaps-eb8b608d-42d0-46a2-b141-586fc336b4f3" satisfied condition "Succeeded or Failed" May 7 00:53:36.084: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-eb8b608d-42d0-46a2-b141-586fc336b4f3 container configmap-volume-test: STEP: delete the pod May 7 00:53:36.136: INFO: Waiting for pod pod-configmaps-eb8b608d-42d0-46a2-b141-586fc336b4f3 to disappear May 7 00:53:36.153: INFO: Pod pod-configmaps-eb8b608d-42d0-46a2-b141-586fc336b4f3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:53:36.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6442" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":165,"skipped":2851,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:53:36.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-2a0069af-df4f-4d2d-9c8d-54398d44d436 STEP: Creating a pod to test consume secrets May 7 00:53:36.362: INFO: Waiting up 
to 5m0s for pod "pod-projected-secrets-b8711281-574e-4bf2-b8ab-f47cf140dd0f" in namespace "projected-7459" to be "Succeeded or Failed" May 7 00:53:36.368: INFO: Pod "pod-projected-secrets-b8711281-574e-4bf2-b8ab-f47cf140dd0f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.99432ms May 7 00:53:38.372: INFO: Pod "pod-projected-secrets-b8711281-574e-4bf2-b8ab-f47cf140dd0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009543924s May 7 00:53:40.376: INFO: Pod "pod-projected-secrets-b8711281-574e-4bf2-b8ab-f47cf140dd0f": Phase="Running", Reason="", readiness=true. Elapsed: 4.013796155s May 7 00:53:42.380: INFO: Pod "pod-projected-secrets-b8711281-574e-4bf2-b8ab-f47cf140dd0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017990018s STEP: Saw pod success May 7 00:53:42.380: INFO: Pod "pod-projected-secrets-b8711281-574e-4bf2-b8ab-f47cf140dd0f" satisfied condition "Succeeded or Failed" May 7 00:53:42.383: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-b8711281-574e-4bf2-b8ab-f47cf140dd0f container secret-volume-test: STEP: delete the pod May 7 00:53:42.431: INFO: Waiting for pod pod-projected-secrets-b8711281-574e-4bf2-b8ab-f47cf140dd0f to disappear May 7 00:53:42.439: INFO: Pod pod-projected-secrets-b8711281-574e-4bf2-b8ab-f47cf140dd0f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:53:42.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7459" for this suite. 
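The projected-secret test above consumes one secret through two separate volumes in the same pod. A minimal sketch of that spec shape, with illustrative secret name, image, and mount paths (not taken from the log):

```python
secret_name = "projected-secret-example"  # illustrative name

# Two volumes projecting the same secret, mounted at different paths.
volumes = [
    {"name": f"secret-volume-{i}",
     "projected": {"sources": [{"secret": {"name": secret_name}}]}}
    for i in (1, 2)
]

pod_spec = {
    "containers": [{
        "name": "secret-volume-test",
        "image": "busybox",  # assumed image
        "command": ["sh", "-c", "cat /etc/secret-1/data /etc/secret-2/data"],
        "volumeMounts": [
            {"name": v["name"], "mountPath": f"/etc/secret-{i}", "readOnly": True}
            for i, v in enumerate(volumes, start=1)
        ],
    }],
    "volumes": volumes,
    "restartPolicy": "Never",
}
```

The point of the test is that both mounts resolve to the same secret data, so the container can read identical content at both paths.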
• [SLOW TEST:6.283 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":166,"skipped":2864,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:53:42.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 7 00:53:42.531: INFO: >>> kubeConfig: /root/.kube/config May 7 00:53:44.498: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:53:55.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1678" for this suite. 
• [SLOW TEST:12.771 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":167,"skipped":2864,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:53:55.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 7 00:53:55.300: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 7 00:53:55.311: INFO: Waiting for terminating namespaces to be deleted... 
May 7 00:53:55.313: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 7 00:53:55.317: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 7 00:53:55.317: INFO: Container kindnet-cni ready: true, restart count 0 May 7 00:53:55.317: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 7 00:53:55.317: INFO: Container kube-proxy ready: true, restart count 0 May 7 00:53:55.317: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 7 00:53:55.322: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 7 00:53:55.322: INFO: Container kindnet-cni ready: true, restart count 0 May 7 00:53:55.322: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 7 00:53:55.322: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160c98db1a3fbd52], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.160c98db1cb7a7af], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:53:56.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9356" for this suite. 
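The two `FailedScheduling` events above come from the node-selector predicate: a pod is feasible on a node only if every `nodeSelector` entry is present in that node's labels, and the test deliberately uses a selector no node satisfies. A minimal sketch of that check, assuming equality-based selectors (the selector key/value below is illustrative):

```python
def node_selector_fits(node_labels: dict, pod_node_selector: dict) -> bool:
    """A node fits when all of the pod's nodeSelector entries
    appear verbatim in the node's labels."""
    return all(node_labels.get(k) == v for k, v in pod_node_selector.items())

# The three nodes from this cluster, with hostname labels only.
nodes = {
    "latest-control-plane": {"kubernetes.io/hostname": "latest-control-plane"},
    "latest-worker": {"kubernetes.io/hostname": "latest-worker"},
    "latest-worker2": {"kubernetes.io/hostname": "latest-worker2"},
}

# A nonempty selector that no node carries, as in the test:
restricted = {"label": "nonempty"}  # illustrative key/value

feasible = [name for name, labels in nodes.items()
            if node_selector_fits(labels, restricted)]
print(f"{len(feasible)}/{len(nodes)} nodes are available")  # 0/3 nodes are available
```

An empty selector matches every node, which is why only the restricted pod stays Pending.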
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":168,"skipped":2873,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:53:56.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:53:56.425: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 7 00:53:56.461: INFO: Number of nodes with available pods: 0 May 7 00:53:56.461: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 7 00:53:56.530: INFO: Number of nodes with available pods: 0
May 7 00:53:56.530: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:53:57.533: INFO: Number of nodes with available pods: 0
May 7 00:53:57.533: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:53:58.535: INFO: Number of nodes with available pods: 0
May 7 00:53:58.535: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:53:59.535: INFO: Number of nodes with available pods: 0
May 7 00:53:59.535: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:54:00.575: INFO: Number of nodes with available pods: 1
May 7 00:54:00.575: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 7 00:54:00.619: INFO: Number of nodes with available pods: 1
May 7 00:54:00.619: INFO: Number of running nodes: 0, number of available pods: 1
May 7 00:54:01.623: INFO: Number of nodes with available pods: 0
May 7 00:54:01.623: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 7 00:54:01.677: INFO: Number of nodes with available pods: 0
May 7 00:54:01.677: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:54:02.743: INFO: Number of nodes with available pods: 0
May 7 00:54:02.743: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:54:03.714: INFO: Number of nodes with available pods: 0
May 7 00:54:03.714: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:54:04.682: INFO: Number of nodes with available pods: 0
May 7 00:54:04.682: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:54:05.744: INFO: Number of nodes with available pods: 0
May 7 00:54:05.744: INFO: Node latest-worker2 is running more than one daemon pod
May 7 00:54:06.700: INFO: Number of nodes with available pods: 
1 May 7 00:54:06.700: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4939, will wait for the garbage collector to delete the pods May 7 00:54:06.774: INFO: Deleting DaemonSet.extensions daemon-set took: 17.590787ms May 7 00:54:07.075: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.251926ms May 7 00:54:15.283: INFO: Number of nodes with available pods: 0 May 7 00:54:15.283: INFO: Number of running nodes: 0, number of available pods: 0 May 7 00:54:15.286: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4939/daemonsets","resourceVersion":"2176727"},"items":null} May 7 00:54:15.288: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4939/pods","resourceVersion":"2176727"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:54:15.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4939" for this suite. 
• [SLOW TEST:19.002 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":169,"skipped":2874,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:54:15.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 7 00:54:15.470: INFO: Waiting up to 5m0s for pod "pod-5d826265-47b7-4956-afcc-8a86ee7a6c6c" in namespace "emptydir-5942" to be "Succeeded or Failed" May 7 00:54:15.476: INFO: Pod "pod-5d826265-47b7-4956-afcc-8a86ee7a6c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.768702ms May 7 00:54:17.683: INFO: Pod "pod-5d826265-47b7-4956-afcc-8a86ee7a6c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213519587s May 7 00:54:19.707: INFO: Pod "pod-5d826265-47b7-4956-afcc-8a86ee7a6c6c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.23746402s STEP: Saw pod success May 7 00:54:19.707: INFO: Pod "pod-5d826265-47b7-4956-afcc-8a86ee7a6c6c" satisfied condition "Succeeded or Failed" May 7 00:54:19.710: INFO: Trying to get logs from node latest-worker2 pod pod-5d826265-47b7-4956-afcc-8a86ee7a6c6c container test-container: STEP: delete the pod May 7 00:54:19.746: INFO: Waiting for pod pod-5d826265-47b7-4956-afcc-8a86ee7a6c6c to disappear May 7 00:54:19.757: INFO: Pod pod-5d826265-47b7-4956-afcc-8a86ee7a6c6c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:54:19.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5942" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":170,"skipped":2889,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:54:19.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-3393 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-3393
STEP: Deleting pre-stop pod
May 7 00:54:33.282: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 00:54:33.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-3393" for this suite.
• [SLOW TEST:13.510 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":171,"skipped":2936,"failed":0}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 00:54:33.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 7 00:54:43.281: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 00:54:43.309: INFO: Pod pod-with-poststart-exec-hook still exists May 7 00:54:45.310: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 00:54:45.314: INFO: Pod pod-with-poststart-exec-hook still exists May 7 00:54:47.310: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 00:54:47.314: INFO: Pod pod-with-poststart-exec-hook still exists May 7 00:54:49.310: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 00:54:49.314: INFO: Pod pod-with-poststart-exec-hook still exists May 7 00:54:51.310: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 00:54:51.314: INFO: Pod pod-with-poststart-exec-hook still exists May 7 00:54:53.310: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 00:54:53.314: INFO: Pod pod-with-poststart-exec-hook still exists May 7 00:54:55.310: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 00:54:55.314: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:54:55.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2925" for this suite. 
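The postStart test above attaches an exec hook that the kubelet runs immediately after the container starts; the pod is not considered Running until the hook completes. A sketch of the container fragment such a test exercises, with illustrative name, image, and command (a preStop hook would sit beside `postStart` with the same `exec` shape):

```python
# Illustrative container spec with a postStart exec lifecycle hook.
container = {
    "name": "pod-with-poststart-exec-hook",
    "image": "busybox",  # assumed image
    "command": ["sh", "-c", "sleep 10"],
    "lifecycle": {
        "postStart": {
            "exec": {
                # Runs inside the container right after it starts; the
                # kubelet blocks the Running transition until it returns.
                "command": ["sh", "-c", "echo poststart > /tmp/hook.log"],
            }
        }
    },
}
```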
• [SLOW TEST:21.994 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":172,"skipped":2939,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:54:55.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:54:55.428: INFO: Waiting up to 5m0s for pod "busybox-user-65534-ca8ab397-93c4-4a5f-a988-50e089211eb2" in namespace "security-context-test-1256" to be "Succeeded or Failed" May 7 00:54:55.437: INFO: Pod "busybox-user-65534-ca8ab397-93c4-4a5f-a988-50e089211eb2": 
Phase="Pending", Reason="", readiness=false. Elapsed: 8.662987ms May 7 00:54:57.440: INFO: Pod "busybox-user-65534-ca8ab397-93c4-4a5f-a988-50e089211eb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011371316s May 7 00:54:59.443: INFO: Pod "busybox-user-65534-ca8ab397-93c4-4a5f-a988-50e089211eb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014765394s May 7 00:54:59.443: INFO: Pod "busybox-user-65534-ca8ab397-93c4-4a5f-a988-50e089211eb2" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:54:59.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1256" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":173,"skipped":2950,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:54:59.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:55:15.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6458" for this suite. • [SLOW TEST:16.128 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":288,"completed":174,"skipped":3071,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:55:15.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 7 00:55:15.716: INFO: Waiting up to 5m0s for pod "pod-6a72bacf-82a6-42e0-b5f4-89598a849ba0" in namespace "emptydir-4034" to be "Succeeded or Failed" May 7 00:55:15.755: INFO: Pod "pod-6a72bacf-82a6-42e0-b5f4-89598a849ba0": Phase="Pending", Reason="", readiness=false. Elapsed: 39.181221ms May 7 00:55:18.047: INFO: Pod "pod-6a72bacf-82a6-42e0-b5f4-89598a849ba0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.330372212s May 7 00:55:20.062: INFO: Pod "pod-6a72bacf-82a6-42e0-b5f4-89598a849ba0": Phase="Running", Reason="", readiness=true. Elapsed: 4.3458399s May 7 00:55:22.154: INFO: Pod "pod-6a72bacf-82a6-42e0-b5f4-89598a849ba0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.43754081s STEP: Saw pod success May 7 00:55:22.154: INFO: Pod "pod-6a72bacf-82a6-42e0-b5f4-89598a849ba0" satisfied condition "Succeeded or Failed" May 7 00:55:22.156: INFO: Trying to get logs from node latest-worker2 pod pod-6a72bacf-82a6-42e0-b5f4-89598a849ba0 container test-container: STEP: delete the pod May 7 00:55:22.410: INFO: Waiting for pod pod-6a72bacf-82a6-42e0-b5f4-89598a849ba0 to disappear May 7 00:55:22.435: INFO: Pod pod-6a72bacf-82a6-42e0-b5f4-89598a849ba0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:55:22.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4034" for this suite. • [SLOW TEST:6.914 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":175,"skipped":3077,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:55:22.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive 
events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:55:28.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4813" for this suite. • [SLOW TEST:6.674 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":176,"skipped":3098,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:55:29.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up 
server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 7 00:55:29.821: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 7 00:55:31.830: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409729, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409729, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409730, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409729, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 00:55:34.869: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 7 00:55:34.897: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:55:34.943: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7257" for this suite. STEP: Destroying namespace "webhook-7257-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.924 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":177,"skipped":3099,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:55:35.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-fb52cb03-6e1b-47e7-8ab7-234b08934d08 STEP: Creating a pod to test consume configMaps May 7 00:55:35.201: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-95c94ef9-b152-4817-acee-21424181707d" in namespace "projected-4012" to be 
"Succeeded or Failed" May 7 00:55:35.205: INFO: Pod "pod-projected-configmaps-95c94ef9-b152-4817-acee-21424181707d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.381598ms May 7 00:55:37.313: INFO: Pod "pod-projected-configmaps-95c94ef9-b152-4817-acee-21424181707d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111977139s May 7 00:55:39.436: INFO: Pod "pod-projected-configmaps-95c94ef9-b152-4817-acee-21424181707d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.23556939s STEP: Saw pod success May 7 00:55:39.437: INFO: Pod "pod-projected-configmaps-95c94ef9-b152-4817-acee-21424181707d" satisfied condition "Succeeded or Failed" May 7 00:55:40.140: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-95c94ef9-b152-4817-acee-21424181707d container projected-configmap-volume-test: STEP: delete the pod May 7 00:55:40.203: INFO: Waiting for pod pod-projected-configmaps-95c94ef9-b152-4817-acee-21424181707d to disappear May 7 00:55:40.226: INFO: Pod pod-projected-configmaps-95c94ef9-b152-4817-acee-21424181707d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:55:40.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4012" for this suite. 
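Editor's note: the "with mappings" variant tested above remaps a ConfigMap key to a custom file path inside a projected volume; the manifest is generated in Go and not shown in the log. A hypothetical equivalent (names, image, keys, and paths are illustrative assumptions):

```yaml
# Illustrative sketch only -- names, image, and paths are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          # "with mappings": items remaps key data-1 to a chosen file path
          items:
          - key: data-1
            path: path/to/data-2
```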
• [SLOW TEST:5.140 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":178,"skipped":3101,"failed":0} [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:55:40.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 7 00:55:44.448: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 
00:55:44.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3836" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":179,"skipped":3101,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:55:44.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 7 00:55:44.596: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:55:59.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-540" for this suite. 
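Editor's note: the multi-version CRD in this test is created programmatically. Marking a version unserved is controlled by the `served` field of `spec.versions`; once false, that version's definition drops out of the published OpenAPI spec, which is what the test checks. A hypothetical sketch (group, kind, and names are made up for illustration):

```yaml
# Illustrative sketch only -- group and names are assumptions.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-foos.example.com
spec:
  group: example.com
  names:
    plural: e2e-test-foos
    singular: e2e-test-foo
    kind: E2eTestFoo
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false   # unserved: removed from the published OpenAPI spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object
```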
• [SLOW TEST:14.782 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":180,"skipped":3145,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:55:59.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 7 00:55:59.946: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c0d3c1d-2c9c-43f1-9a23-a19b6f011d28" in namespace "downward-api-4949" to be "Succeeded or Failed" May 7 00:56:00.015: INFO: Pod "downwardapi-volume-0c0d3c1d-2c9c-43f1-9a23-a19b6f011d28": Phase="Pending", Reason="", readiness=false. 
Elapsed: 69.822651ms May 7 00:56:02.020: INFO: Pod "downwardapi-volume-0c0d3c1d-2c9c-43f1-9a23-a19b6f011d28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074687752s May 7 00:56:04.024: INFO: Pod "downwardapi-volume-0c0d3c1d-2c9c-43f1-9a23-a19b6f011d28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078465559s STEP: Saw pod success May 7 00:56:04.024: INFO: Pod "downwardapi-volume-0c0d3c1d-2c9c-43f1-9a23-a19b6f011d28" satisfied condition "Succeeded or Failed" May 7 00:56:04.027: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-0c0d3c1d-2c9c-43f1-9a23-a19b6f011d28 container client-container: STEP: delete the pod May 7 00:56:04.058: INFO: Waiting for pod downwardapi-volume-0c0d3c1d-2c9c-43f1-9a23-a19b6f011d28 to disappear May 7 00:56:04.071: INFO: Pod downwardapi-volume-0c0d3c1d-2c9c-43f1-9a23-a19b6f011d28 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:56:04.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4949" for this suite. 
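Editor's note: "should provide podname only" mounts a downward-API volume exposing a single file backed by `metadata.name`; the actual manifest is built in Go. A hypothetical equivalent (image and mount path are assumptions):

```yaml
# Illustrative sketch only -- image and mount path are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      # "podname only": one file, populated from the pod's own name
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```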
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":181,"skipped":3200,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:56:04.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 7 00:56:04.384: INFO: Waiting up to 5m0s for pod "var-expansion-6c3c126c-9317-4017-8bc9-d71fbfc73fd3" in namespace "var-expansion-8752" to be "Succeeded or Failed" May 7 00:56:04.430: INFO: Pod "var-expansion-6c3c126c-9317-4017-8bc9-d71fbfc73fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 46.261911ms May 7 00:56:06.551: INFO: Pod "var-expansion-6c3c126c-9317-4017-8bc9-d71fbfc73fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166389343s May 7 00:56:08.555: INFO: Pod "var-expansion-6c3c126c-9317-4017-8bc9-d71fbfc73fd3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.17087411s STEP: Saw pod success May 7 00:56:08.555: INFO: Pod "var-expansion-6c3c126c-9317-4017-8bc9-d71fbfc73fd3" satisfied condition "Succeeded or Failed" May 7 00:56:08.558: INFO: Trying to get logs from node latest-worker2 pod var-expansion-6c3c126c-9317-4017-8bc9-d71fbfc73fd3 container dapi-container: STEP: delete the pod May 7 00:56:08.614: INFO: Waiting for pod var-expansion-6c3c126c-9317-4017-8bc9-d71fbfc73fd3 to disappear May 7 00:56:08.636: INFO: Pod var-expansion-6c3c126c-9317-4017-8bc9-d71fbfc73fd3 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:56:08.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8752" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":182,"skipped":3210,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:56:08.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating a pod May 7 00:56:08.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run 
logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-9585 -- logs-generator --log-lines-total 100 --run-duration 20s' May 7 00:56:08.827: INFO: stderr: "" May 7 00:56:08.827: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. May 7 00:56:08.827: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 7 00:56:08.827: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-9585" to be "running and ready, or succeeded" May 7 00:56:08.844: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 16.814604ms May 7 00:56:10.858: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031252104s May 7 00:56:12.862: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.035632905s May 7 00:56:12.862: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 7 00:56:12.862: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings May 7 00:56:12.862: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9585' May 7 00:56:12.979: INFO: stderr: "" May 7 00:56:12.979: INFO: stdout: "I0507 00:56:11.220464 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/55kt 478\nI0507 00:56:11.420601 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/g48 329\nI0507 00:56:11.620646 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/fl5 408\nI0507 00:56:11.820650 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/wxpb 470\nI0507 00:56:12.020781 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/skg 207\nI0507 00:56:12.220656 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/ldd8 490\nI0507 00:56:12.420611 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/bdl 201\nI0507 00:56:12.620647 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/fxqf 405\nI0507 00:56:12.820670 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/kfcm 449\n" STEP: limiting log lines May 7 00:56:12.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9585 --tail=1' May 7 00:56:13.098: INFO: stderr: "" May 7 00:56:13.098: INFO: stdout: "I0507 00:56:13.020625 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/cjqp 276\n" May 7 00:56:13.099: INFO: got output "I0507 00:56:13.020625 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/cjqp 276\n" STEP: limiting log bytes May 7 00:56:13.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9585 --limit-bytes=1' May 7 00:56:13.220: INFO: stderr: "" May 7 00:56:13.220: INFO: 
stdout: "I" May 7 00:56:13.221: INFO: got output "I" STEP: exposing timestamps May 7 00:56:13.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9585 --tail=1 --timestamps' May 7 00:56:13.322: INFO: stderr: "" May 7 00:56:13.322: INFO: stdout: "2020-05-07T00:56:13.220750347Z I0507 00:56:13.220610 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/vnh 240\n" May 7 00:56:13.322: INFO: got output "2020-05-07T00:56:13.220750347Z I0507 00:56:13.220610 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/vnh 240\n" STEP: restricting to a time range May 7 00:56:15.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9585 --since=1s' May 7 00:56:15.940: INFO: stderr: "" May 7 00:56:15.940: INFO: stdout: "I0507 00:56:15.020638 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/kw5 232\nI0507 00:56:15.220662 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/d69 566\nI0507 00:56:15.420663 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/wrqt 542\nI0507 00:56:15.620636 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/vrg 552\nI0507 00:56:15.820665 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/crj 401\n" May 7 00:56:15.940: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9585 --since=24h' May 7 00:56:16.050: INFO: stderr: "" May 7 00:56:16.050: INFO: stdout: "I0507 00:56:11.220464 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/55kt 478\nI0507 00:56:11.420601 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/g48 329\nI0507 00:56:11.620646 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/fl5 
408\nI0507 00:56:11.820650 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/wxpb 470\nI0507 00:56:12.020781 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/skg 207\nI0507 00:56:12.220656 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/ldd8 490\nI0507 00:56:12.420611 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/bdl 201\nI0507 00:56:12.620647 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/fxqf 405\nI0507 00:56:12.820670 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/kfcm 449\nI0507 00:56:13.020625 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/cjqp 276\nI0507 00:56:13.220610 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/vnh 240\nI0507 00:56:13.420666 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/rx77 246\nI0507 00:56:13.620632 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/xhk 351\nI0507 00:56:13.820667 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/d4w 369\nI0507 00:56:14.020630 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/cck 203\nI0507 00:56:14.220687 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/d45l 258\nI0507 00:56:14.420648 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/6bls 238\nI0507 00:56:14.620709 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/znt 348\nI0507 00:56:14.820623 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/kmt 589\nI0507 00:56:15.020638 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/kw5 232\nI0507 00:56:15.220662 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/d69 566\nI0507 00:56:15.420663 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/wrqt 542\nI0507 00:56:15.620636 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/vrg 552\nI0507 00:56:15.820665 1 logs_generator.go:76] 23 GET 
/api/v1/namespaces/kube-system/pods/crj 401\nI0507 00:56:16.020603 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/cqb 456\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 7 00:56:16.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-9585' May 7 00:56:25.246: INFO: stderr: "" May 7 00:56:25.246: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:56:25.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9585" for this suite. • [SLOW TEST:16.611 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":183,"skipped":3210,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:56:25.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to 
be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 7 00:56:30.454: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:56:30.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-368" for this suite. • [SLOW TEST:5.574 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":184,"skipped":3211,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:56:30.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 7 00:56:30.955: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' May 7 00:56:31.134: INFO: stderr: "" May 7 00:56:31.134: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:56:31.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8667" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":185,"skipped":3218,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:56:31.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-9838 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9838 to expose endpoints map[] May 7 00:56:31.788: INFO: Get endpoints failed (19.063943ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 7 00:56:32.792: INFO: successfully validated that service endpoint-test2 in namespace services-9838 exposes endpoints map[] (1.022815927s elapsed) STEP: Creating pod pod1 in namespace services-9838 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9838 to expose endpoints map[pod1:[80]] May 7 00:56:37.962: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.164414022s elapsed, will retry) May 7 00:56:39.149: INFO: successfully validated that service endpoint-test2 in namespace services-9838 exposes endpoints map[pod1:[80]] (6.351291567s elapsed) STEP: Creating pod pod2 in namespace 
services-9838 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9838 to expose endpoints map[pod1:[80] pod2:[80]] May 7 00:56:42.432: INFO: successfully validated that service endpoint-test2 in namespace services-9838 exposes endpoints map[pod1:[80] pod2:[80]] (3.200671106s elapsed) STEP: Deleting pod pod1 in namespace services-9838 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9838 to expose endpoints map[pod2:[80]] May 7 00:56:43.502: INFO: successfully validated that service endpoint-test2 in namespace services-9838 exposes endpoints map[pod2:[80]] (1.064649466s elapsed) STEP: Deleting pod pod2 in namespace services-9838 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9838 to expose endpoints map[] May 7 00:56:44.536: INFO: successfully validated that service endpoint-test2 in namespace services-9838 exposes endpoints map[] (1.030148412s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:56:44.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9838" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:13.583 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":186,"skipped":3231,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:56:44.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:56:48.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3988" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":187,"skipped":3234,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:56:48.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 7 00:56:48.962: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8e9e153-0a19-4bc2-b2e4-f8cd29c28875" in namespace "projected-1356" to be "Succeeded or Failed" May 7 00:56:49.008: INFO: Pod "downwardapi-volume-b8e9e153-0a19-4bc2-b2e4-f8cd29c28875": Phase="Pending", Reason="", readiness=false. Elapsed: 45.870001ms May 7 00:56:51.020: INFO: Pod "downwardapi-volume-b8e9e153-0a19-4bc2-b2e4-f8cd29c28875": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057996644s May 7 00:56:53.044: INFO: Pod "downwardapi-volume-b8e9e153-0a19-4bc2-b2e4-f8cd29c28875": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.082148417s STEP: Saw pod success May 7 00:56:53.044: INFO: Pod "downwardapi-volume-b8e9e153-0a19-4bc2-b2e4-f8cd29c28875" satisfied condition "Succeeded or Failed" May 7 00:56:53.048: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b8e9e153-0a19-4bc2-b2e4-f8cd29c28875 container client-container: STEP: delete the pod May 7 00:56:53.119: INFO: Waiting for pod downwardapi-volume-b8e9e153-0a19-4bc2-b2e4-f8cd29c28875 to disappear May 7 00:56:53.325: INFO: Pod downwardapi-volume-b8e9e153-0a19-4bc2-b2e4-f8cd29c28875 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:56:53.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1356" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":188,"skipped":3285,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:56:53.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 7 00:56:53.525: INFO: Waiting up to 5m0s for pod 
"downward-api-7c4c384d-0953-443b-b5bd-983facfc9610" in namespace "downward-api-1862" to be "Succeeded or Failed" May 7 00:56:53.888: INFO: Pod "downward-api-7c4c384d-0953-443b-b5bd-983facfc9610": Phase="Pending", Reason="", readiness=false. Elapsed: 363.488127ms May 7 00:56:55.894: INFO: Pod "downward-api-7c4c384d-0953-443b-b5bd-983facfc9610": Phase="Pending", Reason="", readiness=false. Elapsed: 2.369174771s May 7 00:56:57.899: INFO: Pod "downward-api-7c4c384d-0953-443b-b5bd-983facfc9610": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374170584s May 7 00:56:59.902: INFO: Pod "downward-api-7c4c384d-0953-443b-b5bd-983facfc9610": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.377484819s STEP: Saw pod success May 7 00:56:59.902: INFO: Pod "downward-api-7c4c384d-0953-443b-b5bd-983facfc9610" satisfied condition "Succeeded or Failed" May 7 00:56:59.905: INFO: Trying to get logs from node latest-worker pod downward-api-7c4c384d-0953-443b-b5bd-983facfc9610 container dapi-container: STEP: delete the pod May 7 00:56:59.945: INFO: Waiting for pod downward-api-7c4c384d-0953-443b-b5bd-983facfc9610 to disappear May 7 00:56:59.959: INFO: Pod downward-api-7c4c384d-0953-443b-b5bd-983facfc9610 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:56:59.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1862" for this suite. 
• [SLOW TEST:6.547 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":189,"skipped":3291,"failed":0} S ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:56:59.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8770 STEP: creating service affinity-nodeport in namespace services-8770 STEP: creating replication controller affinity-nodeport in namespace services-8770 I0507 00:57:00.194352 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-8770, replica count: 3 I0507 00:57:03.244759 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 
00:57:06.244994 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 7 00:57:06.273: INFO: Creating new exec pod May 7 00:57:11.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8770 execpod-affinity7knzd -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 7 00:57:11.539: INFO: stderr: "I0507 00:57:11.444614 2325 log.go:172] (0xc0009780b0) (0xc0004d2dc0) Create stream\nI0507 00:57:11.444676 2325 log.go:172] (0xc0009780b0) (0xc0004d2dc0) Stream added, broadcasting: 1\nI0507 00:57:11.447264 2325 log.go:172] (0xc0009780b0) Reply frame received for 1\nI0507 00:57:11.447305 2325 log.go:172] (0xc0009780b0) (0xc00030a1e0) Create stream\nI0507 00:57:11.447315 2325 log.go:172] (0xc0009780b0) (0xc00030a1e0) Stream added, broadcasting: 3\nI0507 00:57:11.448550 2325 log.go:172] (0xc0009780b0) Reply frame received for 3\nI0507 00:57:11.448580 2325 log.go:172] (0xc0009780b0) (0xc0006c8640) Create stream\nI0507 00:57:11.448589 2325 log.go:172] (0xc0009780b0) (0xc0006c8640) Stream added, broadcasting: 5\nI0507 00:57:11.449758 2325 log.go:172] (0xc0009780b0) Reply frame received for 5\nI0507 00:57:11.530328 2325 log.go:172] (0xc0009780b0) Data frame received for 5\nI0507 00:57:11.530366 2325 log.go:172] (0xc0006c8640) (5) Data frame handling\nI0507 00:57:11.530386 2325 log.go:172] (0xc0006c8640) (5) Data frame sent\nI0507 00:57:11.530397 2325 log.go:172] (0xc0009780b0) Data frame received for 5\nI0507 00:57:11.530412 2325 log.go:172] (0xc0006c8640) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0507 00:57:11.530457 2325 log.go:172] (0xc0006c8640) (5) Data frame sent\nI0507 00:57:11.530599 2325 log.go:172] (0xc0009780b0) Data frame received for 5\nI0507 00:57:11.530622 2325 log.go:172] (0xc0006c8640) (5) Data 
frame handling\nI0507 00:57:11.531022 2325 log.go:172] (0xc0009780b0) Data frame received for 3\nI0507 00:57:11.531038 2325 log.go:172] (0xc00030a1e0) (3) Data frame handling\nI0507 00:57:11.533043 2325 log.go:172] (0xc0009780b0) Data frame received for 1\nI0507 00:57:11.533063 2325 log.go:172] (0xc0004d2dc0) (1) Data frame handling\nI0507 00:57:11.533070 2325 log.go:172] (0xc0004d2dc0) (1) Data frame sent\nI0507 00:57:11.533079 2325 log.go:172] (0xc0009780b0) (0xc0004d2dc0) Stream removed, broadcasting: 1\nI0507 00:57:11.533087 2325 log.go:172] (0xc0009780b0) Go away received\nI0507 00:57:11.533763 2325 log.go:172] (0xc0009780b0) (0xc0004d2dc0) Stream removed, broadcasting: 1\nI0507 00:57:11.533788 2325 log.go:172] (0xc0009780b0) (0xc00030a1e0) Stream removed, broadcasting: 3\nI0507 00:57:11.533810 2325 log.go:172] (0xc0009780b0) (0xc0006c8640) Stream removed, broadcasting: 5\n" May 7 00:57:11.539: INFO: stdout: "" May 7 00:57:11.540: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8770 execpod-affinity7knzd -- /bin/sh -x -c nc -zv -t -w 2 10.105.198.4 80' May 7 00:57:11.758: INFO: stderr: "I0507 00:57:11.674057 2346 log.go:172] (0xc000c3e9a0) (0xc0005d57c0) Create stream\nI0507 00:57:11.674103 2346 log.go:172] (0xc000c3e9a0) (0xc0005d57c0) Stream added, broadcasting: 1\nI0507 00:57:11.676401 2346 log.go:172] (0xc000c3e9a0) Reply frame received for 1\nI0507 00:57:11.676466 2346 log.go:172] (0xc000c3e9a0) (0xc0003a32c0) Create stream\nI0507 00:57:11.676495 2346 log.go:172] (0xc000c3e9a0) (0xc0003a32c0) Stream added, broadcasting: 3\nI0507 00:57:11.677525 2346 log.go:172] (0xc000c3e9a0) Reply frame received for 3\nI0507 00:57:11.677578 2346 log.go:172] (0xc000c3e9a0) (0xc0003a3b80) Create stream\nI0507 00:57:11.677606 2346 log.go:172] (0xc000c3e9a0) (0xc0003a3b80) Stream added, broadcasting: 5\nI0507 00:57:11.678552 2346 log.go:172] (0xc000c3e9a0) Reply frame received for 5\nI0507 
00:57:11.751492 2346 log.go:172] (0xc000c3e9a0) Data frame received for 3\nI0507 00:57:11.751520 2346 log.go:172] (0xc0003a32c0) (3) Data frame handling\nI0507 00:57:11.751553 2346 log.go:172] (0xc000c3e9a0) Data frame received for 5\nI0507 00:57:11.751583 2346 log.go:172] (0xc0003a3b80) (5) Data frame handling\nI0507 00:57:11.751603 2346 log.go:172] (0xc0003a3b80) (5) Data frame sent\nI0507 00:57:11.751615 2346 log.go:172] (0xc000c3e9a0) Data frame received for 5\nI0507 00:57:11.751635 2346 log.go:172] (0xc0003a3b80) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.198.4 80\nConnection to 10.105.198.4 80 port [tcp/http] succeeded!\nI0507 00:57:11.752615 2346 log.go:172] (0xc000c3e9a0) Data frame received for 1\nI0507 00:57:11.752636 2346 log.go:172] (0xc0005d57c0) (1) Data frame handling\nI0507 00:57:11.752655 2346 log.go:172] (0xc0005d57c0) (1) Data frame sent\nI0507 00:57:11.752683 2346 log.go:172] (0xc000c3e9a0) (0xc0005d57c0) Stream removed, broadcasting: 1\nI0507 00:57:11.752703 2346 log.go:172] (0xc000c3e9a0) Go away received\nI0507 00:57:11.753476 2346 log.go:172] (0xc000c3e9a0) (0xc0005d57c0) Stream removed, broadcasting: 1\nI0507 00:57:11.753512 2346 log.go:172] (0xc000c3e9a0) (0xc0003a32c0) Stream removed, broadcasting: 3\nI0507 00:57:11.753529 2346 log.go:172] (0xc000c3e9a0) (0xc0003a3b80) Stream removed, broadcasting: 5\n" May 7 00:57:11.759: INFO: stdout: "" May 7 00:57:11.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8770 execpod-affinity7knzd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31911' May 7 00:57:12.005: INFO: stderr: "I0507 00:57:11.929019 2366 log.go:172] (0xc000ab5290) (0xc000ae43c0) Create stream\nI0507 00:57:11.929068 2366 log.go:172] (0xc000ab5290) (0xc000ae43c0) Stream added, broadcasting: 1\nI0507 00:57:11.932315 2366 log.go:172] (0xc000ab5290) Reply frame received for 1\nI0507 00:57:11.932350 2366 log.go:172] (0xc000ab5290) (0xc0006c2f00) 
Create stream\nI0507 00:57:11.932379 2366 log.go:172] (0xc000ab5290) (0xc0006c2f00) Stream added, broadcasting: 3\nI0507 00:57:11.933853 2366 log.go:172] (0xc000ab5290) Reply frame received for 3\nI0507 00:57:11.933894 2366 log.go:172] (0xc000ab5290) (0xc0009ea1e0) Create stream\nI0507 00:57:11.933915 2366 log.go:172] (0xc000ab5290) (0xc0009ea1e0) Stream added, broadcasting: 5\nI0507 00:57:11.934748 2366 log.go:172] (0xc000ab5290) Reply frame received for 5\nI0507 00:57:11.998982 2366 log.go:172] (0xc000ab5290) Data frame received for 5\nI0507 00:57:11.999010 2366 log.go:172] (0xc0009ea1e0) (5) Data frame handling\nI0507 00:57:11.999034 2366 log.go:172] (0xc0009ea1e0) (5) Data frame sent\nI0507 00:57:11.999044 2366 log.go:172] (0xc000ab5290) Data frame received for 5\nI0507 00:57:11.999050 2366 log.go:172] (0xc0009ea1e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31911\nConnection to 172.17.0.13 31911 port [tcp/31911] succeeded!\nI0507 00:57:11.999059 2366 log.go:172] (0xc000ab5290) Data frame received for 3\nI0507 00:57:11.999080 2366 log.go:172] (0xc0006c2f00) (3) Data frame handling\nI0507 00:57:12.001043 2366 log.go:172] (0xc000ab5290) Data frame received for 1\nI0507 00:57:12.001077 2366 log.go:172] (0xc000ae43c0) (1) Data frame handling\nI0507 00:57:12.001249 2366 log.go:172] (0xc000ae43c0) (1) Data frame sent\nI0507 00:57:12.001462 2366 log.go:172] (0xc000ab5290) (0xc000ae43c0) Stream removed, broadcasting: 1\nI0507 00:57:12.001550 2366 log.go:172] (0xc000ab5290) Go away received\nI0507 00:57:12.001729 2366 log.go:172] (0xc000ab5290) (0xc000ae43c0) Stream removed, broadcasting: 1\nI0507 00:57:12.001741 2366 log.go:172] (0xc000ab5290) (0xc0006c2f00) Stream removed, broadcasting: 3\nI0507 00:57:12.001747 2366 log.go:172] (0xc000ab5290) (0xc0009ea1e0) Stream removed, broadcasting: 5\n" May 7 00:57:12.006: INFO: stdout: "" May 7 00:57:12.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec 
--namespace=services-8770 execpod-affinity7knzd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31911' May 7 00:57:12.210: INFO: stderr: "I0507 00:57:12.131520 2386 log.go:172] (0xc0006ca210) (0xc0005a61e0) Create stream\nI0507 00:57:12.131592 2386 log.go:172] (0xc0006ca210) (0xc0005a61e0) Stream added, broadcasting: 1\nI0507 00:57:12.134008 2386 log.go:172] (0xc0006ca210) Reply frame received for 1\nI0507 00:57:12.134041 2386 log.go:172] (0xc0006ca210) (0xc000544d20) Create stream\nI0507 00:57:12.134049 2386 log.go:172] (0xc0006ca210) (0xc000544d20) Stream added, broadcasting: 3\nI0507 00:57:12.134830 2386 log.go:172] (0xc0006ca210) Reply frame received for 3\nI0507 00:57:12.134891 2386 log.go:172] (0xc0006ca210) (0xc0005a7180) Create stream\nI0507 00:57:12.134917 2386 log.go:172] (0xc0006ca210) (0xc0005a7180) Stream added, broadcasting: 5\nI0507 00:57:12.135772 2386 log.go:172] (0xc0006ca210) Reply frame received for 5\nI0507 00:57:12.202320 2386 log.go:172] (0xc0006ca210) Data frame received for 3\nI0507 00:57:12.202377 2386 log.go:172] (0xc000544d20) (3) Data frame handling\nI0507 00:57:12.202400 2386 log.go:172] (0xc0006ca210) Data frame received for 5\nI0507 00:57:12.202409 2386 log.go:172] (0xc0005a7180) (5) Data frame handling\nI0507 00:57:12.202419 2386 log.go:172] (0xc0005a7180) (5) Data frame sent\nI0507 00:57:12.202429 2386 log.go:172] (0xc0006ca210) Data frame received for 5\nI0507 00:57:12.202437 2386 log.go:172] (0xc0005a7180) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31911\nConnection to 172.17.0.12 31911 port [tcp/31911] succeeded!\nI0507 00:57:12.204136 2386 log.go:172] (0xc0006ca210) Data frame received for 1\nI0507 00:57:12.204165 2386 log.go:172] (0xc0005a61e0) (1) Data frame handling\nI0507 00:57:12.204194 2386 log.go:172] (0xc0005a61e0) (1) Data frame sent\nI0507 00:57:12.204216 2386 log.go:172] (0xc0006ca210) (0xc0005a61e0) Stream removed, broadcasting: 1\nI0507 00:57:12.204239 2386 log.go:172] (0xc0006ca210) Go away received\nI0507 
00:57:12.204712 2386 log.go:172] (0xc0006ca210) (0xc0005a61e0) Stream removed, broadcasting: 1\nI0507 00:57:12.204736 2386 log.go:172] (0xc0006ca210) (0xc000544d20) Stream removed, broadcasting: 3\nI0507 00:57:12.204749 2386 log.go:172] (0xc0006ca210) (0xc0005a7180) Stream removed, broadcasting: 5\n" May 7 00:57:12.210: INFO: stdout: "" May 7 00:57:12.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8770 execpod-affinity7knzd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31911/ ; done' May 7 00:57:12.519: INFO: stderr: "I0507 00:57:12.357257 2407 log.go:172] (0xc00091e000) (0xc000810b40) Create stream\nI0507 00:57:12.357325 2407 log.go:172] (0xc00091e000) (0xc000810b40) Stream added, broadcasting: 1\nI0507 00:57:12.360214 2407 log.go:172] (0xc00091e000) Reply frame received for 1\nI0507 00:57:12.360244 2407 log.go:172] (0xc00091e000) (0xc000804dc0) Create stream\nI0507 00:57:12.360255 2407 log.go:172] (0xc00091e000) (0xc000804dc0) Stream added, broadcasting: 3\nI0507 00:57:12.361420 2407 log.go:172] (0xc00091e000) Reply frame received for 3\nI0507 00:57:12.361560 2407 log.go:172] (0xc00091e000) (0xc000800640) Create stream\nI0507 00:57:12.361573 2407 log.go:172] (0xc00091e000) (0xc000800640) Stream added, broadcasting: 5\nI0507 00:57:12.362793 2407 log.go:172] (0xc00091e000) Reply frame received for 5\nI0507 00:57:12.421752 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.421796 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.421824 2407 log.go:172] (0xc000800640) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31911/\nI0507 00:57:12.421860 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.421870 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.421887 2407 log.go:172] (0xc000804dc0) (3) 
Data frame sent\nI0507 00:57:12.429793 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.429820 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.429838 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.430792 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.430814 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.430825 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.430840 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.430850 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.430859 2407 log.go:172] (0xc000800640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31911/\nI0507 00:57:12.435614 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.435669 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.435783 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.436066 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.436102 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.436130 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.436159 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.436176 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.436198 2407 log.go:172] (0xc000800640) (5) Data frame sent\nI0507 00:57:12.436216 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.436227 2407 log.go:172] (0xc000800640) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31911/\nI0507 00:57:12.436286 2407 log.go:172] (0xc000800640) (5) Data frame sent\nI0507 00:57:12.443945 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.443966 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.443988 2407 log.go:172] 
(0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.444695 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.444730 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.444745 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.444768 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.444782 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.444805 2407 log.go:172] (0xc000800640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31911/\nI0507 00:57:12.448943 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.448970 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.448982 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.449400 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.449422 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.449439 2407 log.go:172] (0xc000800640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31911/\nI0507 00:57:12.449538 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.449562 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.449603 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.454213 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.454232 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.454271 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.454846 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.454870 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.454880 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.454895 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.454905 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.454915 
2407 log.go:172] (0xc000800640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31911/\nI0507 00:57:12.459407 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.459431 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.459455 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.460001 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.460020 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.460033 2407 log.go:172] (0xc000800640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31911/\nI0507 00:57:12.460104 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.460130 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.460147 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.467307 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.467343 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.467371 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.467950 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.467970 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.467989 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.468018 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.468036 2407 log.go:172] (0xc000800640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31911/\nI0507 00:57:12.468061 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.472121 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.472145 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.472164 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.473086 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 
00:57:12.473314 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.473360 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.473384 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.473419 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.473450 2407 log.go:172] (0xc000800640) (5) Data frame sent\nI0507 00:57:12.473473 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.473491 2407 log.go:172] (0xc000800640) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31911/\nI0507 00:57:12.473541 2407 log.go:172] (0xc000800640) (5) Data frame sent\nI0507 00:57:12.477038 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.477059 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.477086 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.477861 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.477891 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.477904 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.477922 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.477937 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.477949 2407 log.go:172] (0xc000800640) (5) Data frame sent\nI0507 00:57:12.477963 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.477975 2407 log.go:172] (0xc000800640) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31911/\nI0507 00:57:12.478024 2407 log.go:172] (0xc000800640) (5) Data frame sent\nI0507 00:57:12.481604 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.481622 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.481639 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.481984 2407 log.go:172] (0xc00091e000) Data frame 
received for 3\nI0507 00:57:12.482011 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.482041 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.482083 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.482113 2407 log.go:172] (0xc000800640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31911/\nI0507 00:57:12.482145 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.485706 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.485731 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.485759 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.486130 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.486149 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.486172 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.486195 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.486206 2407 log.go:172] (0xc000800640) (5) Data frame sent\nI0507 00:57:12.486217 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31911/\nI0507 00:57:12.492262 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.492283 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.492301 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.492738 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.492759 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.492771 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.492791 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.492805 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.492829 2407 log.go:172] (0xc000800640) (5) Data frame sent\nI0507 00:57:12.492840 2407 log.go:172] 
(0xc00091e000) Data frame received for 5\nI0507 00:57:12.492885 2407 log.go:172] (0xc000800640) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31911/\nI0507 00:57:12.492926 2407 log.go:172] (0xc000800640) (5) Data frame sent\nI0507 00:57:12.496281 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.496301 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.496311 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.496635 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.496663 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.496680 2407 log.go:172] (0xc000800640) (5) Data frame sent\n+ echo\nI0507 00:57:12.496803 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.496849 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.496876 2407 log.go:172] (0xc000800640) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31911/\nI0507 00:57:12.496911 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.496933 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.496964 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.500546 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.500565 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.500584 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.500940 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.500963 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.500977 2407 log.go:172] (0xc000800640) (5) Data frame sent\n+ echo\n+ curl -q -sI0507 00:57:12.501008 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.501039 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.501063 2407 log.go:172] (0xc000804dc0) (3) Data frame 
sent\nI0507 00:57:12.501084 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.501106 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.501434 2407 log.go:172] (0xc000800640) (5) Data frame sent\n --connect-timeout 2 http://172.17.0.13:31911/\nI0507 00:57:12.505292 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.505319 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.505339 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.505852 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.505875 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.505888 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.505901 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.505911 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.505936 2407 log.go:172] (0xc000800640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31911/\nI0507 00:57:12.509829 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.509848 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.509865 2407 log.go:172] (0xc000804dc0) (3) Data frame sent\nI0507 00:57:12.510950 2407 log.go:172] (0xc00091e000) Data frame received for 5\nI0507 00:57:12.510976 2407 log.go:172] (0xc000800640) (5) Data frame handling\nI0507 00:57:12.511163 2407 log.go:172] (0xc00091e000) Data frame received for 3\nI0507 00:57:12.511176 2407 log.go:172] (0xc000804dc0) (3) Data frame handling\nI0507 00:57:12.512827 2407 log.go:172] (0xc00091e000) Data frame received for 1\nI0507 00:57:12.512852 2407 log.go:172] (0xc000810b40) (1) Data frame handling\nI0507 00:57:12.512875 2407 log.go:172] (0xc000810b40) (1) Data frame sent\nI0507 00:57:12.512907 2407 log.go:172] (0xc00091e000) (0xc000810b40) Stream removed, broadcasting: 1\nI0507 00:57:12.512931 2407 log.go:172] 
(0xc00091e000) Go away received\nI0507 00:57:12.513498 2407 log.go:172] (0xc00091e000) (0xc000810b40) Stream removed, broadcasting: 1\nI0507 00:57:12.513520 2407 log.go:172] (0xc00091e000) (0xc000804dc0) Stream removed, broadcasting: 3\nI0507 00:57:12.513540 2407 log.go:172] (0xc00091e000) (0xc000800640) Stream removed, broadcasting: 5\n" May 7 00:57:12.520: INFO: stdout: "\naffinity-nodeport-msxhw\naffinity-nodeport-msxhw\naffinity-nodeport-msxhw\naffinity-nodeport-msxhw\naffinity-nodeport-msxhw\naffinity-nodeport-msxhw\naffinity-nodeport-msxhw\naffinity-nodeport-msxhw\naffinity-nodeport-msxhw\naffinity-nodeport-msxhw\naffinity-nodeport-msxhw\naffinity-nodeport-msxhw\naffinity-nodeport-msxhw\naffinity-nodeport-msxhw\naffinity-nodeport-msxhw\naffinity-nodeport-msxhw" May 7 00:57:12.520: INFO: Received response from host: May 7 00:57:12.520: INFO: Received response from host: affinity-nodeport-msxhw May 7 00:57:12.520: INFO: Received response from host: affinity-nodeport-msxhw May 7 00:57:12.520: INFO: Received response from host: affinity-nodeport-msxhw May 7 00:57:12.520: INFO: Received response from host: affinity-nodeport-msxhw May 7 00:57:12.520: INFO: Received response from host: affinity-nodeport-msxhw May 7 00:57:12.520: INFO: Received response from host: affinity-nodeport-msxhw May 7 00:57:12.520: INFO: Received response from host: affinity-nodeport-msxhw May 7 00:57:12.520: INFO: Received response from host: affinity-nodeport-msxhw May 7 00:57:12.520: INFO: Received response from host: affinity-nodeport-msxhw May 7 00:57:12.520: INFO: Received response from host: affinity-nodeport-msxhw May 7 00:57:12.520: INFO: Received response from host: affinity-nodeport-msxhw May 7 00:57:12.520: INFO: Received response from host: affinity-nodeport-msxhw May 7 00:57:12.520: INFO: Received response from host: affinity-nodeport-msxhw May 7 00:57:12.520: INFO: Received response from host: affinity-nodeport-msxhw May 7 00:57:12.520: INFO: Received response from host: 
affinity-nodeport-msxhw May 7 00:57:12.520: INFO: Received response from host: affinity-nodeport-msxhw May 7 00:57:12.520: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-8770, will wait for the garbage collector to delete the pods May 7 00:57:12.640: INFO: Deleting ReplicationController affinity-nodeport took: 7.532323ms May 7 00:57:13.040: INFO: Terminating ReplicationController affinity-nodeport pods took: 400.240021ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:57:25.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8770" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:25.625 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":190,"skipped":3292,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:57:25.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 7 00:57:25.750: INFO: PodSpec: initContainers in spec.initContainers May 7 00:58:16.315: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-958f1a96-f713-41ec-84a2-71491c2d98ca", GenerateName:"", Namespace:"init-container-782", SelfLink:"/api/v1/namespaces/init-container-782/pods/pod-init-958f1a96-f713-41ec-84a2-71491c2d98ca", UID:"61614d6c-9314-436f-bc34-59df6e017228", ResourceVersion:"2178282", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724409845, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"750434758"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0026280c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0026280e0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002628100), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002628120)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ptvv6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), 
AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004b5c000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ptvv6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), 
Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ptvv6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ptvv6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00047a218), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003d0000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00047a660)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00047a690)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00047a698), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00047a69c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409845, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409845, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409845, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724409845, loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.2.235", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.235"}}, StartTime:(*v1.Time)(0xc002628140), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003d0150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003d01c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://ec5c501fbdbc18f1325770f888791534bbe648ffd2a0a300728314a3d48af96a", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002628180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002628160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00047b8bf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:58:16.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-782" for this suite. 
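The InitContainer test above shows the expected gating behavior: `init1` (`/bin/false`) has failed and restarted three times, `init2` is still Waiting, and the app container `run1` never starts. A minimal illustrative sketch of that sequencing rule (not the kubelet's actual code; names and tuple shape are invented for the example):

```python
# Illustrative only: init containers run sequentially, and app containers
# may start only after every init container has terminated successfully.
# Status tuples mirror the pod dump in the log above.

def may_start_app_containers(init_statuses):
    """init_statuses: list of (name, state, exit_code) tuples, in spec order.

    state is one of "Terminated", "Running", "Waiting".
    Returns True only if all init containers terminated with exit code 0.
    """
    for name, state, exit_code in init_statuses:
        if state != "Terminated" or exit_code != 0:
            # A failed or still-pending init container blocks everything after it.
            return False
    return True

# Mirrors the pod in the log: init1 keeps failing, init2 is still Waiting,
# so run1 must not start.
statuses = [("init1", "Terminated", 1), ("init2", "Waiting", None)]
assert may_start_app_containers(statuses) is False
```

Because the pod's restart policy is `Always`, the kubelet keeps restarting the failing `init1` (hence `RestartCount:3` in the dump) rather than marking the pod Failed.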
• [SLOW TEST:50.773 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":191,"skipped":3319,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:58:16.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-8888, will wait for the garbage collector to delete the pods May 7 00:58:22.542: INFO: Deleting Job.batch foo took: 6.673413ms May 7 00:58:22.943: INFO: Terminating Job.batch foo pods took: 400.425878ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:59:04.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8888" for this suite. 
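The Job test above spends most of its 48 seconds in the "Ensuring job was deleted" step: the framework polls until the garbage collector has removed the Job and its pods. A hedged sketch of that wait loop, with a hypothetical injected `get_job` callable standing in for the API read:

```python
import time

# Illustrative poll-until-gone helper, not the e2e framework's code.
# `get_job` is a hypothetical callable returning None once the object
# has been garbage-collected.

def wait_for_deletion(get_job, timeout=60.0, interval=0.01,
                      clock=time.monotonic, sleep=time.sleep):
    """Poll get_job() until it returns None or the timeout expires."""
    deadline = clock() + timeout
    while clock() < deadline:
        if get_job() is None:
            return True  # object is gone
        sleep(interval)
    return False  # still present at the deadline

# Usage with a fake getter that "deletes" the job after three polls:
calls = {"n": 0}
def fake_get_job():
    calls["n"] += 1
    return None if calls["n"] > 3 else object()

assert wait_for_deletion(fake_get_job, timeout=1.0) is True
```

Injecting the clock and sleep functions keeps the helper testable without real delays, which is why the fake above completes in milliseconds.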
• [SLOW TEST:48.592 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":192,"skipped":3327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:59:04.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 7 00:59:05.024: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:59:21.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4325" for this suite. 
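The CRD rename test above performs three distinct checks after the version rename: the new name is served, the old name is gone, and the untouched version still works. A small sketch of those assertions, assuming a hypothetical `served_versions` list of version names read from the published OpenAPI spec:

```python
# Illustrative check mirroring the three STEPs in the log above; the real
# test inspects the published OpenAPI spec rather than a plain list.

def check_rename(served_versions, old, new, others):
    return (
        new in served_versions                          # new version name is served
        and old not in served_versions                  # old version name is removed
        and all(v in served_versions for v in others)   # other versions unchanged
    )

# Hypothetical version names for illustration:
assert check_rename(["v2", "v3"], old="v1", new="v2", others=["v3"]) is True
assert check_rename(["v1", "v3"], old="v1", new="v2", others=["v3"]) is False
```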
• [SLOW TEST:17.047 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":193,"skipped":3366,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:59:22.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:59:22.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9795" for this suite. 
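The Events test above walks through the full lifecycle: create, list, patch, fetch, delete, list again. A purely in-memory mirror of that sequence (the real test drives the Kubernetes Events API; the dict-based store here is an illustration only):

```python
# In-memory stand-in for the event lifecycle exercised in the log above.
# The real API applies a strategic merge patch; this sketch does a plain
# dict update.

store = {}

def create(name, body):
    store[name] = dict(body)

def patch(name, changes):
    store[name].update(changes)

def fetch(name):
    return store.get(name)

def delete(name):
    store.pop(name, None)

# Same order of operations as the STEPs in the log:
create("test-event", {"message": "original"})
assert "test-event" in store            # listing shows the event
patch("test-event", {"message": "patched"})
assert fetch("test-event")["message"] == "patched"
delete("test-event")
assert fetch("test-event") is None      # final listing no longer shows it
```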
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":194,"skipped":3415,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:59:22.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 00:59:22.264: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:59:22.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1538" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":195,"skipped":3433,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:59:22.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-8a966831-16b8-4a4d-a29e-29ed227015df STEP: Creating a pod to test consume configMaps May 7 00:59:22.993: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ca1329a-036e-4c17-b09f-c647dc85a661" in namespace "configmap-7999" to be "Succeeded or Failed" May 7 00:59:23.010: INFO: Pod "pod-configmaps-7ca1329a-036e-4c17-b09f-c647dc85a661": Phase="Pending", Reason="", readiness=false. Elapsed: 16.768522ms May 7 00:59:25.123: INFO: Pod "pod-configmaps-7ca1329a-036e-4c17-b09f-c647dc85a661": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130407366s May 7 00:59:27.126: INFO: Pod "pod-configmaps-7ca1329a-036e-4c17-b09f-c647dc85a661": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.133092002s STEP: Saw pod success May 7 00:59:27.126: INFO: Pod "pod-configmaps-7ca1329a-036e-4c17-b09f-c647dc85a661" satisfied condition "Succeeded or Failed" May 7 00:59:27.128: INFO: Trying to get logs from node latest-worker pod pod-configmaps-7ca1329a-036e-4c17-b09f-c647dc85a661 container configmap-volume-test: STEP: delete the pod May 7 00:59:27.154: INFO: Waiting for pod pod-configmaps-7ca1329a-036e-4c17-b09f-c647dc85a661 to disappear May 7 00:59:27.224: INFO: Pod pod-configmaps-7ca1329a-036e-4c17-b09f-c647dc85a661 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 00:59:27.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7999" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":196,"skipped":3469,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 00:59:27.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-de3b99ad-8a92-478b-b192-e0982c3954d5 
STEP: updating the pod May 7 00:59:37.896: INFO: Successfully updated pod "var-expansion-de3b99ad-8a92-478b-b192-e0982c3954d5" STEP: waiting for pod and container restart STEP: Failing liveness probe May 7 00:59:37.956: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-4551 PodName:var-expansion-de3b99ad-8a92-478b-b192-e0982c3954d5 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:59:37.956: INFO: >>> kubeConfig: /root/.kube/config I0507 00:59:37.989636 7 log.go:172] (0xc0036bb3f0) (0xc001e5bcc0) Create stream I0507 00:59:37.989664 7 log.go:172] (0xc0036bb3f0) (0xc001e5bcc0) Stream added, broadcasting: 1 I0507 00:59:37.991531 7 log.go:172] (0xc0036bb3f0) Reply frame received for 1 I0507 00:59:37.991593 7 log.go:172] (0xc0036bb3f0) (0xc0015e8d20) Create stream I0507 00:59:37.991622 7 log.go:172] (0xc0036bb3f0) (0xc0015e8d20) Stream added, broadcasting: 3 I0507 00:59:37.992697 7 log.go:172] (0xc0036bb3f0) Reply frame received for 3 I0507 00:59:37.992744 7 log.go:172] (0xc0036bb3f0) (0xc001256000) Create stream I0507 00:59:37.992764 7 log.go:172] (0xc0036bb3f0) (0xc001256000) Stream added, broadcasting: 5 I0507 00:59:37.993947 7 log.go:172] (0xc0036bb3f0) Reply frame received for 5 I0507 00:59:38.061853 7 log.go:172] (0xc0036bb3f0) Data frame received for 3 I0507 00:59:38.061898 7 log.go:172] (0xc0015e8d20) (3) Data frame handling I0507 00:59:38.061970 7 log.go:172] (0xc0036bb3f0) Data frame received for 5 I0507 00:59:38.062011 7 log.go:172] (0xc001256000) (5) Data frame handling I0507 00:59:38.063547 7 log.go:172] (0xc0036bb3f0) Data frame received for 1 I0507 00:59:38.063591 7 log.go:172] (0xc001e5bcc0) (1) Data frame handling I0507 00:59:38.063610 7 log.go:172] (0xc001e5bcc0) (1) Data frame sent I0507 00:59:38.063621 7 log.go:172] (0xc0036bb3f0) (0xc001e5bcc0) Stream removed, broadcasting: 1 I0507 00:59:38.063634 7 log.go:172] (0xc0036bb3f0) Go away received 
I0507 00:59:38.063744 7 log.go:172] (0xc0036bb3f0) (0xc001e5bcc0) Stream removed, broadcasting: 1 I0507 00:59:38.063768 7 log.go:172] (0xc0036bb3f0) (0xc0015e8d20) Stream removed, broadcasting: 3 I0507 00:59:38.063782 7 log.go:172] (0xc0036bb3f0) (0xc001256000) Stream removed, broadcasting: 5 May 7 00:59:38.063: INFO: Pod exec output: / STEP: Waiting for container to restart May 7 00:59:38.068: INFO: Container dapi-container, restarts: 0 May 7 00:59:48.073: INFO: Container dapi-container, restarts: 0 May 7 00:59:58.072: INFO: Container dapi-container, restarts: 0 May 7 01:00:08.073: INFO: Container dapi-container, restarts: 0 May 7 01:00:18.073: INFO: Container dapi-container, restarts: 1 May 7 01:00:18.073: INFO: Container has restart count: 1 STEP: Rewriting the file May 7 01:00:18.073: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-4551 PodName:var-expansion-de3b99ad-8a92-478b-b192-e0982c3954d5 ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:00:18.073: INFO: >>> kubeConfig: /root/.kube/config I0507 01:00:18.107959 7 log.go:172] (0xc003534790) (0xc00154c0a0) Create stream I0507 01:00:18.107995 7 log.go:172] (0xc003534790) (0xc00154c0a0) Stream added, broadcasting: 1 I0507 01:00:18.110254 7 log.go:172] (0xc003534790) Reply frame received for 1 I0507 01:00:18.110284 7 log.go:172] (0xc003534790) (0xc001257220) Create stream I0507 01:00:18.110295 7 log.go:172] (0xc003534790) (0xc001257220) Stream added, broadcasting: 3 I0507 01:00:18.111098 7 log.go:172] (0xc003534790) Reply frame received for 3 I0507 01:00:18.111152 7 log.go:172] (0xc003534790) (0xc001e5bd60) Create stream I0507 01:00:18.111177 7 log.go:172] (0xc003534790) (0xc001e5bd60) Stream added, broadcasting: 5 I0507 01:00:18.111934 7 log.go:172] (0xc003534790) Reply frame received for 5 I0507 01:00:18.186104 7 log.go:172] (0xc003534790) Data frame received for 3 I0507 01:00:18.186158 
7 log.go:172] (0xc001257220) (3) Data frame handling I0507 01:00:18.186260 7 log.go:172] (0xc003534790) Data frame received for 5 I0507 01:00:18.186287 7 log.go:172] (0xc001e5bd60) (5) Data frame handling I0507 01:00:18.187583 7 log.go:172] (0xc003534790) Data frame received for 1 I0507 01:00:18.187602 7 log.go:172] (0xc00154c0a0) (1) Data frame handling I0507 01:00:18.187615 7 log.go:172] (0xc00154c0a0) (1) Data frame sent I0507 01:00:18.187626 7 log.go:172] (0xc003534790) (0xc00154c0a0) Stream removed, broadcasting: 1 I0507 01:00:18.187660 7 log.go:172] (0xc003534790) Go away received I0507 01:00:18.187702 7 log.go:172] (0xc003534790) (0xc00154c0a0) Stream removed, broadcasting: 1 I0507 01:00:18.187712 7 log.go:172] (0xc003534790) (0xc001257220) Stream removed, broadcasting: 3 I0507 01:00:18.187718 7 log.go:172] (0xc003534790) (0xc001e5bd60) Stream removed, broadcasting: 5 May 7 01:00:18.187: INFO: Exec stderr: "" May 7 01:00:18.187: INFO: Pod exec output: STEP: Waiting for container to stop restarting May 7 01:00:48.195: INFO: Container has restart count: 2 May 7 01:01:50.195: INFO: Container restart has stabilized STEP: test for subpath mounted with old value May 7 01:01:50.199: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-4551 PodName:var-expansion-de3b99ad-8a92-478b-b192-e0982c3954d5 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:01:50.199: INFO: >>> kubeConfig: /root/.kube/config I0507 01:01:50.235588 7 log.go:172] (0xc002d904d0) (0xc0015e9b80) Create stream I0507 01:01:50.235618 7 log.go:172] (0xc002d904d0) (0xc0015e9b80) Stream added, broadcasting: 1 I0507 01:01:50.237949 7 log.go:172] (0xc002d904d0) Reply frame received for 1 I0507 01:01:50.237988 7 log.go:172] (0xc002d904d0) (0xc0015e9c20) Create stream I0507 01:01:50.238005 7 log.go:172] (0xc002d904d0) (0xc0015e9c20) Stream added, broadcasting: 3 I0507 01:01:50.239032 7 log.go:172] 
(0xc002d904d0) Reply frame received for 3 I0507 01:01:50.239073 7 log.go:172] (0xc002d904d0) (0xc0015e9cc0) Create stream I0507 01:01:50.239090 7 log.go:172] (0xc002d904d0) (0xc0015e9cc0) Stream added, broadcasting: 5 I0507 01:01:50.240171 7 log.go:172] (0xc002d904d0) Reply frame received for 5 I0507 01:01:50.322008 7 log.go:172] (0xc002d904d0) Data frame received for 5 I0507 01:01:50.322066 7 log.go:172] (0xc0015e9cc0) (5) Data frame handling I0507 01:01:50.322104 7 log.go:172] (0xc002d904d0) Data frame received for 3 I0507 01:01:50.322122 7 log.go:172] (0xc0015e9c20) (3) Data frame handling I0507 01:01:50.323293 7 log.go:172] (0xc002d904d0) Data frame received for 1 I0507 01:01:50.323323 7 log.go:172] (0xc0015e9b80) (1) Data frame handling I0507 01:01:50.323340 7 log.go:172] (0xc0015e9b80) (1) Data frame sent I0507 01:01:50.323357 7 log.go:172] (0xc002d904d0) (0xc0015e9b80) Stream removed, broadcasting: 1 I0507 01:01:50.323442 7 log.go:172] (0xc002d904d0) (0xc0015e9b80) Stream removed, broadcasting: 1 I0507 01:01:50.323460 7 log.go:172] (0xc002d904d0) (0xc0015e9c20) Stream removed, broadcasting: 3 I0507 01:01:50.323845 7 log.go:172] (0xc002d904d0) Go away received I0507 01:01:50.323925 7 log.go:172] (0xc002d904d0) (0xc0015e9cc0) Stream removed, broadcasting: 5 May 7 01:01:50.327: INFO: ExecWithOptions {Command:[/bin/sh -c test ! 
-f /volume_mount/newsubpath/test.log] Namespace:var-expansion-4551 PodName:var-expansion-de3b99ad-8a92-478b-b192-e0982c3954d5 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:01:50.327: INFO: >>> kubeConfig: /root/.kube/config I0507 01:01:50.356749 7 log.go:172] (0xc00066e2c0) (0xc001421400) Create stream I0507 01:01:50.356781 7 log.go:172] (0xc00066e2c0) (0xc001421400) Stream added, broadcasting: 1 I0507 01:01:50.360798 7 log.go:172] (0xc00066e2c0) Reply frame received for 1 I0507 01:01:50.360869 7 log.go:172] (0xc00066e2c0) (0xc001421540) Create stream I0507 01:01:50.360914 7 log.go:172] (0xc00066e2c0) (0xc001421540) Stream added, broadcasting: 3 I0507 01:01:50.364756 7 log.go:172] (0xc00066e2c0) Reply frame received for 3 I0507 01:01:50.364795 7 log.go:172] (0xc00066e2c0) (0xc0015e9ea0) Create stream I0507 01:01:50.364811 7 log.go:172] (0xc00066e2c0) (0xc0015e9ea0) Stream added, broadcasting: 5 I0507 01:01:50.365839 7 log.go:172] (0xc00066e2c0) Reply frame received for 5 I0507 01:01:50.430796 7 log.go:172] (0xc00066e2c0) Data frame received for 3 I0507 01:01:50.430846 7 log.go:172] (0xc001421540) (3) Data frame handling I0507 01:01:50.430883 7 log.go:172] (0xc00066e2c0) Data frame received for 5 I0507 01:01:50.430902 7 log.go:172] (0xc0015e9ea0) (5) Data frame handling I0507 01:01:50.431915 7 log.go:172] (0xc00066e2c0) Data frame received for 1 I0507 01:01:50.431940 7 log.go:172] (0xc001421400) (1) Data frame handling I0507 01:01:50.431958 7 log.go:172] (0xc001421400) (1) Data frame sent I0507 01:01:50.431977 7 log.go:172] (0xc00066e2c0) (0xc001421400) Stream removed, broadcasting: 1 I0507 01:01:50.431996 7 log.go:172] (0xc00066e2c0) Go away received I0507 01:01:50.432104 7 log.go:172] (0xc00066e2c0) (0xc001421400) Stream removed, broadcasting: 1 I0507 01:01:50.432123 7 log.go:172] (0xc00066e2c0) (0xc001421540) Stream removed, broadcasting: 3 I0507 01:01:50.432137 7 log.go:172] (0xc00066e2c0) 
(0xc0015e9ea0) Stream removed, broadcasting: 5 May 7 01:01:50.432: INFO: Deleting pod "var-expansion-de3b99ad-8a92-478b-b192-e0982c3954d5" in namespace "var-expansion-4551" May 7 01:01:50.437: INFO: Wait up to 5m0s for pod "var-expansion-de3b99ad-8a92-478b-b192-e0982c3954d5" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:02:24.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4551" for this suite. • [SLOW TEST:177.228 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":197,"skipped":3493,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:02:24.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 7 01:02:24.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3759' May 7 01:02:24.961: INFO: stderr: "" May 7 01:02:24.961: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 7 01:02:26.066: INFO: Selector matched 1 pods for map[app:agnhost] May 7 01:02:26.066: INFO: Found 0 / 1 May 7 01:02:26.966: INFO: Selector matched 1 pods for map[app:agnhost] May 7 01:02:26.966: INFO: Found 0 / 1 May 7 01:02:27.967: INFO: Selector matched 1 pods for map[app:agnhost] May 7 01:02:27.967: INFO: Found 1 / 1 May 7 01:02:27.967: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 7 01:02:27.970: INFO: Selector matched 1 pods for map[app:agnhost] May 7 01:02:27.970: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 7 01:02:27.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-bwwfw --namespace=kubectl-3759 -p {"metadata":{"annotations":{"x":"y"}}}' May 7 01:02:28.083: INFO: stderr: "" May 7 01:02:28.083: INFO: stdout: "pod/agnhost-master-bwwfw patched\n" STEP: checking annotations May 7 01:02:28.155: INFO: Selector matched 1 pods for map[app:agnhost] May 7 01:02:28.155: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:02:28.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3759" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":198,"skipped":3495,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:02:28.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 7 01:02:29.772: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 7 01:02:32.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410149, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410149, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410149, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410149, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 01:02:34.192: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410149, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410149, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410149, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410149, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 01:02:37.138: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the 
admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:02:37.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8469" for this suite. STEP: Destroying namespace "webhook-8469-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.159 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":199,"skipped":3534,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:02:38.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:02:38.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3759" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":200,"skipped":3535,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:02:38.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 7 01:02:39.293: INFO: Waiting up to 5m0s for pod 
"pod-03638e45-0381-436a-9a64-e1f8549dba4b" in namespace "emptydir-8556" to be "Succeeded or Failed" May 7 01:02:39.296: INFO: Pod "pod-03638e45-0381-436a-9a64-e1f8549dba4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.871184ms May 7 01:02:41.323: INFO: Pod "pod-03638e45-0381-436a-9a64-e1f8549dba4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02962669s May 7 01:02:43.328: INFO: Pod "pod-03638e45-0381-436a-9a64-e1f8549dba4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035205328s May 7 01:02:45.333: INFO: Pod "pod-03638e45-0381-436a-9a64-e1f8549dba4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040201254s STEP: Saw pod success May 7 01:02:45.333: INFO: Pod "pod-03638e45-0381-436a-9a64-e1f8549dba4b" satisfied condition "Succeeded or Failed" May 7 01:02:45.337: INFO: Trying to get logs from node latest-worker2 pod pod-03638e45-0381-436a-9a64-e1f8549dba4b container test-container: STEP: delete the pod May 7 01:02:45.373: INFO: Waiting for pod pod-03638e45-0381-436a-9a64-e1f8549dba4b to disappear May 7 01:02:45.387: INFO: Pod pod-03638e45-0381-436a-9a64-e1f8549dba4b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:02:45.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8556" for this suite. 
• [SLOW TEST:6.439 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":201,"skipped":3543,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:02:45.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:02:54.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3743" for this suite. 
• [SLOW TEST:8.703 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":202,"skipped":3556,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:02:54.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 7 01:02:55.339: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 7 01:02:57.460: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410175, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410175, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410175, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410175, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 01:03:00.588: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:03:00.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7587" for this suite. 
STEP: Destroying namespace "webhook-7587-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.971 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":203,"skipped":3571,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:03:01.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-60b637c3-94b8-4880-b08c-7eedf3da9557 STEP: Creating a pod to test consume secrets May 7 01:03:01.225: INFO: Waiting up to 5m0s for pod "pod-secrets-918780db-380e-46be-9d98-0906950c4790" in namespace "secrets-5234" to be "Succeeded or Failed" May 7 01:03:01.231: INFO: Pod 
"pod-secrets-918780db-380e-46be-9d98-0906950c4790": Phase="Pending", Reason="", readiness=false. Elapsed: 5.758139ms May 7 01:03:04.755: INFO: Pod "pod-secrets-918780db-380e-46be-9d98-0906950c4790": Phase="Pending", Reason="", readiness=false. Elapsed: 3.529629298s May 7 01:03:06.764: INFO: Pod "pod-secrets-918780db-380e-46be-9d98-0906950c4790": Phase="Pending", Reason="", readiness=false. Elapsed: 5.538554815s May 7 01:03:08.768: INFO: Pod "pod-secrets-918780db-380e-46be-9d98-0906950c4790": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.543036554s STEP: Saw pod success May 7 01:03:08.768: INFO: Pod "pod-secrets-918780db-380e-46be-9d98-0906950c4790" satisfied condition "Succeeded or Failed" May 7 01:03:08.772: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-918780db-380e-46be-9d98-0906950c4790 container secret-volume-test: STEP: delete the pod May 7 01:03:08.852: INFO: Waiting for pod pod-secrets-918780db-380e-46be-9d98-0906950c4790 to disappear May 7 01:03:08.879: INFO: Pod pod-secrets-918780db-380e-46be-9d98-0906950c4790 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:03:08.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5234" for this suite. 
• [SLOW TEST:7.817 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":204,"skipped":3584,"failed":0} [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:03:08.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:03:09.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-923" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":288,"completed":205,"skipped":3584,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:03:09.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:03:13.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8943" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":206,"skipped":3594,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:03:13.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-9a681cdc-a65c-4d99-828b-171276bb2fb9 in namespace container-probe-7909 May 7 01:03:18.223: INFO: Started pod busybox-9a681cdc-a65c-4d99-828b-171276bb2fb9 in namespace container-probe-7909 STEP: checking the pod's current state and verifying that restartCount is present May 7 01:03:18.225: INFO: Initial restart count of pod busybox-9a681cdc-a65c-4d99-828b-171276bb2fb9 is 0 May 7 01:04:12.341: INFO: Restart count of pod container-probe-7909/busybox-9a681cdc-a65c-4d99-828b-171276bb2fb9 is now 1 (54.115320739s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:04:12.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-probe-7909" for this suite. • [SLOW TEST:58.932 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":207,"skipped":3603,"failed":0} [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:04:12.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 7 01:04:12.502: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3470 /api/v1/namespaces/watch-3470/configmaps/e2e-watch-test-configmap-a 1e234f70-e717-409d-a433-b7c98ea02b54 2179810 0 2020-05-07 01:04:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-07 01:04:12 +0000 UTC 
FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 7 01:04:12.502: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3470 /api/v1/namespaces/watch-3470/configmaps/e2e-watch-test-configmap-a 1e234f70-e717-409d-a433-b7c98ea02b54 2179810 0 2020-05-07 01:04:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-07 01:04:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 7 01:04:22.510: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3470 /api/v1/namespaces/watch-3470/configmaps/e2e-watch-test-configmap-a 1e234f70-e717-409d-a433-b7c98ea02b54 2179853 0 2020-05-07 01:04:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-07 01:04:22 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 7 01:04:22.510: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3470 /api/v1/namespaces/watch-3470/configmaps/e2e-watch-test-configmap-a 1e234f70-e717-409d-a433-b7c98ea02b54 2179853 0 2020-05-07 01:04:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-07 01:04:22 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 7 01:04:32.518: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3470 /api/v1/namespaces/watch-3470/configmaps/e2e-watch-test-configmap-a 1e234f70-e717-409d-a433-b7c98ea02b54 2179883 0 2020-05-07 01:04:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-07 01:04:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 7 01:04:32.519: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3470 /api/v1/namespaces/watch-3470/configmaps/e2e-watch-test-configmap-a 1e234f70-e717-409d-a433-b7c98ea02b54 2179883 0 2020-05-07 01:04:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-07 01:04:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 7 01:04:42.526: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3470 /api/v1/namespaces/watch-3470/configmaps/e2e-watch-test-configmap-a 1e234f70-e717-409d-a433-b7c98ea02b54 2179911 0 2020-05-07 01:04:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-07 01:04:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 7 01:04:42.526: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3470 /api/v1/namespaces/watch-3470/configmaps/e2e-watch-test-configmap-a 1e234f70-e717-409d-a433-b7c98ea02b54 2179911 0 2020-05-07 01:04:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] 
[] [{e2e.test Update v1 2020-05-07 01:04:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 7 01:04:52.535: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3470 /api/v1/namespaces/watch-3470/configmaps/e2e-watch-test-configmap-b efcbb204-e9e8-462a-822f-3d2b25a10fbb 2179941 0 2020-05-07 01:04:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-07 01:04:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 7 01:04:52.535: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3470 /api/v1/namespaces/watch-3470/configmaps/e2e-watch-test-configmap-b efcbb204-e9e8-462a-822f-3d2b25a10fbb 2179941 0 2020-05-07 01:04:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-07 01:04:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 7 01:05:02.547: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3470 /api/v1/namespaces/watch-3470/configmaps/e2e-watch-test-configmap-b efcbb204-e9e8-462a-822f-3d2b25a10fbb 2179971 0 2020-05-07 01:04:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-07 01:04:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 7 01:05:02.547: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3470 /api/v1/namespaces/watch-3470/configmaps/e2e-watch-test-configmap-b efcbb204-e9e8-462a-822f-3d2b25a10fbb 2179971 0 2020-05-07 01:04:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-07 01:04:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:05:12.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3470" for this suite. • [SLOW TEST:60.169 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":208,"skipped":3603,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:05:12.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 7 01:05:12.692: INFO: Waiting up to 5m0s for pod "downwardapi-volume-231d4df0-e9de-4c19-8a8f-3656bccf5e27" in namespace "downward-api-9032" to be "Succeeded or Failed" May 7 01:05:12.722: INFO: Pod "downwardapi-volume-231d4df0-e9de-4c19-8a8f-3656bccf5e27": Phase="Pending", Reason="", readiness=false. Elapsed: 30.109685ms May 7 01:05:14.727: INFO: Pod "downwardapi-volume-231d4df0-e9de-4c19-8a8f-3656bccf5e27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035270328s May 7 01:05:16.732: INFO: Pod "downwardapi-volume-231d4df0-e9de-4c19-8a8f-3656bccf5e27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040398685s STEP: Saw pod success May 7 01:05:16.732: INFO: Pod "downwardapi-volume-231d4df0-e9de-4c19-8a8f-3656bccf5e27" satisfied condition "Succeeded or Failed" May 7 01:05:16.735: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-231d4df0-e9de-4c19-8a8f-3656bccf5e27 container client-container: STEP: delete the pod May 7 01:05:16.917: INFO: Waiting for pod downwardapi-volume-231d4df0-e9de-4c19-8a8f-3656bccf5e27 to disappear May 7 01:05:16.923: INFO: Pod downwardapi-volume-231d4df0-e9de-4c19-8a8f-3656bccf5e27 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:05:16.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9032" for this suite. 
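The Downward API test above projects pod metadata into a volume file with an explicit per-item `mode`. A minimal sketch of the volume source it exercises — the volume name, file path, and field are hypothetical stand-ins, since the log does not show the manifest:

```python
# Sketch of a downwardAPI volume source with an item-level mode, the shape
# exercised by the "should set mode on item file" test above. Values here
# are illustrative, not the test's exact manifest.
downward_api_volume = {
    "name": "podinfo",
    "downwardAPI": {
        "items": [{
            "path": "podname",
            "fieldRef": {"fieldPath": "metadata.name"},
            "mode": 0o400,  # serialized as decimal 256 in the API
        }],
    },
}

print(downward_api_volume["downwardAPI"]["items"][0]["mode"])  # 256
```

The per-item `mode` overrides the volume's `defaultMode` for that file only, which is what the test verifies by reading the file's permissions from inside the container.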
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":209,"skipped":3610,"failed":0} SS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:05:16.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-8c99da9d-0ba9-4835-b599-264f0b16cf47 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:05:17.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-566" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":210,"skipped":3612,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:05:17.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 7 01:05:19.170: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 7 01:05:21.180: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410319, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410319, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410319, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410319, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 01:05:23.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410319, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410319, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410319, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410319, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 01:05:26.226: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 
7 01:05:26.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9279" for this suite. STEP: Destroying namespace "webhook-9279-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.829 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":211,"skipped":3618,"failed":0} S ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:05:26.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8140 STEP: creating service affinity-clusterip-transition in namespace 
services-8140 STEP: creating replication controller affinity-clusterip-transition in namespace services-8140 I0507 01:05:26.954182 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-8140, replica count: 3 I0507 01:05:30.004571 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 01:05:33.004817 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 7 01:05:33.012: INFO: Creating new exec pod May 7 01:05:38.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8140 execpod-affinitydnnwh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 7 01:05:43.796: INFO: stderr: "I0507 01:05:43.725809 2472 log.go:172] (0xc000b204d0) (0xc000830d20) Create stream\nI0507 01:05:43.725847 2472 log.go:172] (0xc000b204d0) (0xc000830d20) Stream added, broadcasting: 1\nI0507 01:05:43.727430 2472 log.go:172] (0xc000b204d0) Reply frame received for 1\nI0507 01:05:43.727464 2472 log.go:172] (0xc000b204d0) (0xc000831cc0) Create stream\nI0507 01:05:43.727472 2472 log.go:172] (0xc000b204d0) (0xc000831cc0) Stream added, broadcasting: 3\nI0507 01:05:43.728183 2472 log.go:172] (0xc000b204d0) Reply frame received for 3\nI0507 01:05:43.728211 2472 log.go:172] (0xc000b204d0) (0xc00082a5a0) Create stream\nI0507 01:05:43.728221 2472 log.go:172] (0xc000b204d0) (0xc00082a5a0) Stream added, broadcasting: 5\nI0507 01:05:43.728822 2472 log.go:172] (0xc000b204d0) Reply frame received for 5\nI0507 01:05:43.787303 2472 log.go:172] (0xc000b204d0) Data frame received for 5\nI0507 01:05:43.787364 2472 log.go:172] (0xc00082a5a0) (5) Data frame handling\nI0507 01:05:43.787397 2472 log.go:172] (0xc00082a5a0) (5) Data frame 
sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0507 01:05:43.787641 2472 log.go:172] (0xc000b204d0) Data frame received for 5\nI0507 01:05:43.787677 2472 log.go:172] (0xc00082a5a0) (5) Data frame handling\nI0507 01:05:43.787704 2472 log.go:172] (0xc00082a5a0) (5) Data frame sent\nI0507 01:05:43.787722 2472 log.go:172] (0xc000b204d0) Data frame received for 5\nI0507 01:05:43.787752 2472 log.go:172] (0xc00082a5a0) (5) Data frame handling\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0507 01:05:43.788112 2472 log.go:172] (0xc000b204d0) Data frame received for 3\nI0507 01:05:43.788129 2472 log.go:172] (0xc000831cc0) (3) Data frame handling\nI0507 01:05:43.790185 2472 log.go:172] (0xc000b204d0) Data frame received for 1\nI0507 01:05:43.790217 2472 log.go:172] (0xc000830d20) (1) Data frame handling\nI0507 01:05:43.790236 2472 log.go:172] (0xc000830d20) (1) Data frame sent\nI0507 01:05:43.790251 2472 log.go:172] (0xc000b204d0) (0xc000830d20) Stream removed, broadcasting: 1\nI0507 01:05:43.790268 2472 log.go:172] (0xc000b204d0) Go away received\nI0507 01:05:43.790607 2472 log.go:172] (0xc000b204d0) (0xc000830d20) Stream removed, broadcasting: 1\nI0507 01:05:43.790628 2472 log.go:172] (0xc000b204d0) (0xc000831cc0) Stream removed, broadcasting: 3\nI0507 01:05:43.790637 2472 log.go:172] (0xc000b204d0) (0xc00082a5a0) Stream removed, broadcasting: 5\n" May 7 01:05:43.796: INFO: stdout: "" May 7 01:05:43.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8140 execpod-affinitydnnwh -- /bin/sh -x -c nc -zv -t -w 2 10.108.251.54 80' May 7 01:05:44.008: INFO: stderr: "I0507 01:05:43.930952 2508 log.go:172] (0xc00003a790) (0xc0006e5360) Create stream\nI0507 01:05:43.930999 2508 log.go:172] (0xc00003a790) (0xc0006e5360) Stream added, broadcasting: 1\nI0507 01:05:43.933615 2508 log.go:172] (0xc00003a790) Reply frame received for 1\nI0507 01:05:43.933643 2508 
log.go:172] (0xc00003a790) (0xc00056c460) Create stream\nI0507 01:05:43.933650 2508 log.go:172] (0xc00003a790) (0xc00056c460) Stream added, broadcasting: 3\nI0507 01:05:43.934658 2508 log.go:172] (0xc00003a790) Reply frame received for 3\nI0507 01:05:43.934682 2508 log.go:172] (0xc00003a790) (0xc0006e5400) Create stream\nI0507 01:05:43.934690 2508 log.go:172] (0xc00003a790) (0xc0006e5400) Stream added, broadcasting: 5\nI0507 01:05:43.935563 2508 log.go:172] (0xc00003a790) Reply frame received for 5\nI0507 01:05:44.000779 2508 log.go:172] (0xc00003a790) Data frame received for 3\nI0507 01:05:44.000819 2508 log.go:172] (0xc00003a790) Data frame received for 5\nI0507 01:05:44.000855 2508 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0507 01:05:44.000891 2508 log.go:172] (0xc0006e5400) (5) Data frame sent\nI0507 01:05:44.000920 2508 log.go:172] (0xc00003a790) Data frame received for 5\n+ nc -zv -t -w 2 10.108.251.54 80\nConnection to 10.108.251.54 80 port [tcp/http] succeeded!\nI0507 01:05:44.000944 2508 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0507 01:05:44.001008 2508 log.go:172] (0xc00056c460) (3) Data frame handling\nI0507 01:05:44.002252 2508 log.go:172] (0xc00003a790) Data frame received for 1\nI0507 01:05:44.002284 2508 log.go:172] (0xc0006e5360) (1) Data frame handling\nI0507 01:05:44.002303 2508 log.go:172] (0xc0006e5360) (1) Data frame sent\nI0507 01:05:44.002330 2508 log.go:172] (0xc00003a790) (0xc0006e5360) Stream removed, broadcasting: 1\nI0507 01:05:44.002355 2508 log.go:172] (0xc00003a790) Go away received\nI0507 01:05:44.002874 2508 log.go:172] (0xc00003a790) (0xc0006e5360) Stream removed, broadcasting: 1\nI0507 01:05:44.002901 2508 log.go:172] (0xc00003a790) (0xc00056c460) Stream removed, broadcasting: 3\nI0507 01:05:44.002921 2508 log.go:172] (0xc00003a790) (0xc0006e5400) Stream removed, broadcasting: 5\n" May 7 01:05:44.008: INFO: stdout: "" May 7 01:05:44.017: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8140 execpod-affinitydnnwh -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.108.251.54:80/ ; done' May 7 01:05:44.332: INFO: stderr: "I0507 01:05:44.170419 2529 log.go:172] (0xc000856bb0) (0xc00031e0a0) Create stream\nI0507 01:05:44.170469 2529 log.go:172] (0xc000856bb0) (0xc00031e0a0) Stream added, broadcasting: 1\nI0507 01:05:44.176541 2529 log.go:172] (0xc000856bb0) Reply frame received for 1\nI0507 01:05:44.176574 2529 log.go:172] (0xc000856bb0) (0xc0003a54a0) Create stream\nI0507 01:05:44.176583 2529 log.go:172] (0xc000856bb0) (0xc0003a54a0) Stream added, broadcasting: 3\nI0507 01:05:44.178158 2529 log.go:172] (0xc000856bb0) Reply frame received for 3\nI0507 01:05:44.178182 2529 log.go:172] (0xc000856bb0) (0xc0003de0a0) Create stream\nI0507 01:05:44.178203 2529 log.go:172] (0xc000856bb0) (0xc0003de0a0) Stream added, broadcasting: 5\nI0507 01:05:44.178987 2529 log.go:172] (0xc000856bb0) Reply frame received for 5\nI0507 01:05:44.241012 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.241059 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\nI0507 01:05:44.241101 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\nI0507 01:05:44.241290 2529 log.go:172] (0xc000856bb0) Data frame received for 3\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.241321 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.241337 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.245062 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.245084 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.245100 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.247186 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.247199 2529 log.go:172] (0xc0003de0a0) (5) Data frame 
handling\nI0507 01:05:44.247209 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.247232 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.247255 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.247267 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.250420 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.250431 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.250442 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.250929 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.250943 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.250957 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.250964 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.250974 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\nI0507 01:05:44.250980 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.255595 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.255612 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.255624 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.255983 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.255997 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\nI0507 01:05:44.256004 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.256019 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.256038 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.256056 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.259456 2529 log.go:172] (0xc000856bb0) Data frame 
received for 3\nI0507 01:05:44.259467 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.259473 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.259807 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.259819 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\nI0507 01:05:44.259825 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\nI0507 01:05:44.259830 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.259834 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.259847 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.259859 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.259869 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.259878 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\nI0507 01:05:44.264027 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.264045 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.264058 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.264437 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.264459 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\nI0507 01:05:44.264476 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.264508 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.264522 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.264531 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.268740 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.268757 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.268774 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.269017 2529 log.go:172] 
(0xc000856bb0) Data frame received for 5\nI0507 01:05:44.269078 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\nI0507 01:05:44.269254 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\nI0507 01:05:44.269309 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.269330 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.269345 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.274664 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.274690 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.274703 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.275245 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.275263 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.275271 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.275291 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.275317 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\nI0507 01:05:44.275345 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.278782 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.278795 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.278809 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.279286 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.279308 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.279324 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.279351 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\nI0507 01:05:44.279383 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.279400 2529 
log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.284699 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.284713 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.284723 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.285317 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.285328 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.285340 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.285352 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.285360 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\nI0507 01:05:44.285365 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.291267 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.291283 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.291298 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.292083 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.292108 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.292118 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.292135 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.292140 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\nI0507 01:05:44.292148 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.297796 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.297813 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.297834 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.298241 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.298258 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 
01:05:44.298270 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.298296 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.298304 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\nI0507 01:05:44.298311 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.302708 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.302723 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.302729 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.302752 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.302762 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\nI0507 01:05:44.302779 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.304033 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.304063 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.304113 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.307344 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.307356 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.307362 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.307890 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.307900 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.307905 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.307915 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.307930 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\nI0507 01:05:44.307942 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\nI0507 01:05:44.307948 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.307954 2529 log.go:172] (0xc0003de0a0) (5) Data frame 
handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.307973 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\nI0507 01:05:44.313555 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.313581 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.313613 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.314100 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.314115 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.314125 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.314138 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.314145 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\nI0507 01:05:44.314151 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.318915 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.318935 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.318953 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.319447 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.319486 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\nI0507 01:05:44.319500 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\nI0507 01:05:44.319515 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.319531 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.319560 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.319581 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.319601 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.319617 2529 log.go:172] (0xc0003de0a0) (5) Data frame sent\nI0507 01:05:44.323991 2529 log.go:172] (0xc000856bb0) Data frame 
received for 3\nI0507 01:05:44.324011 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.324027 2529 log.go:172] (0xc0003a54a0) (3) Data frame sent\nI0507 01:05:44.324504 2529 log.go:172] (0xc000856bb0) Data frame received for 3\nI0507 01:05:44.324514 2529 log.go:172] (0xc0003a54a0) (3) Data frame handling\nI0507 01:05:44.324630 2529 log.go:172] (0xc000856bb0) Data frame received for 5\nI0507 01:05:44.324641 2529 log.go:172] (0xc0003de0a0) (5) Data frame handling\nI0507 01:05:44.326612 2529 log.go:172] (0xc000856bb0) Data frame received for 1\nI0507 01:05:44.326626 2529 log.go:172] (0xc00031e0a0) (1) Data frame handling\nI0507 01:05:44.326632 2529 log.go:172] (0xc00031e0a0) (1) Data frame sent\nI0507 01:05:44.326640 2529 log.go:172] (0xc000856bb0) (0xc00031e0a0) Stream removed, broadcasting: 1\nI0507 01:05:44.326704 2529 log.go:172] (0xc000856bb0) Go away received\nI0507 01:05:44.326911 2529 log.go:172] (0xc000856bb0) (0xc00031e0a0) Stream removed, broadcasting: 1\nI0507 01:05:44.326934 2529 log.go:172] (0xc000856bb0) (0xc0003a54a0) Stream removed, broadcasting: 3\nI0507 01:05:44.326940 2529 log.go:172] (0xc000856bb0) (0xc0003de0a0) Stream removed, broadcasting: 5\n" May 7 01:05:44.332: INFO: stdout: "\naffinity-clusterip-transition-x8mlq\naffinity-clusterip-transition-x8mlq\naffinity-clusterip-transition-x8mlq\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-2tr9t\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-2tr9t\naffinity-clusterip-transition-2tr9t\naffinity-clusterip-transition-2tr9t\naffinity-clusterip-transition-x8mlq\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-x8mlq\naffinity-clusterip-transition-2tr9t\naffinity-clusterip-transition-2tr9t\naffinity-clusterip-transition-x8mlq" May 7 01:05:44.333: INFO: Received response from host: May 7 01:05:44.333: INFO: Received response from host: affinity-clusterip-transition-x8mlq May 7 
01:05:44.333: INFO: Received response from host: affinity-clusterip-transition-x8mlq May 7 01:05:44.333: INFO: Received response from host: affinity-clusterip-transition-x8mlq May 7 01:05:44.333: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.333: INFO: Received response from host: affinity-clusterip-transition-2tr9t May 7 01:05:44.333: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.333: INFO: Received response from host: affinity-clusterip-transition-2tr9t May 7 01:05:44.333: INFO: Received response from host: affinity-clusterip-transition-2tr9t May 7 01:05:44.333: INFO: Received response from host: affinity-clusterip-transition-2tr9t May 7 01:05:44.333: INFO: Received response from host: affinity-clusterip-transition-x8mlq May 7 01:05:44.333: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.333: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.333: INFO: Received response from host: affinity-clusterip-transition-x8mlq May 7 01:05:44.333: INFO: Received response from host: affinity-clusterip-transition-2tr9t May 7 01:05:44.333: INFO: Received response from host: affinity-clusterip-transition-2tr9t May 7 01:05:44.333: INFO: Received response from host: affinity-clusterip-transition-x8mlq May 7 01:05:44.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8140 execpod-affinitydnnwh -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.108.251.54:80/ ; done' May 7 01:05:44.659: INFO: stderr: "I0507 01:05:44.474544 2549 log.go:172] (0xc000aa1760) (0xc000bb80a0) Create stream\nI0507 01:05:44.474608 2549 log.go:172] (0xc000aa1760) (0xc000bb80a0) Stream added, broadcasting: 1\nI0507 01:05:44.479336 2549 log.go:172] (0xc000aa1760) Reply frame received for 1\nI0507 01:05:44.479377 2549 log.go:172] 
(0xc000aa1760) (0xc000240780) Create stream\nI0507 01:05:44.479386 2549 log.go:172] (0xc000aa1760) (0xc000240780) Stream added, broadcasting: 3\nI0507 01:05:44.480273 2549 log.go:172] (0xc000aa1760) Reply frame received for 3\nI0507 01:05:44.480318 2549 log.go:172] (0xc000aa1760) (0xc00069a0a0) Create stream\nI0507 01:05:44.480331 2549 log.go:172] (0xc000aa1760) (0xc00069a0a0) Stream added, broadcasting: 5\nI0507 01:05:44.481555 2549 log.go:172] (0xc000aa1760) Reply frame received for 5\nI0507 01:05:44.567290 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.567337 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.567352 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.567371 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.567380 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.567390 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.572691 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.572706 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.572716 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.573792 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.573810 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.573818 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.573865 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.573900 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.573924 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.578271 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.578286 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.578297 
2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.578902 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.578920 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.578931 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.578946 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.578961 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.578978 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.583384 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.583405 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.583425 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.583845 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.583864 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.583873 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.583896 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.583911 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.583919 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.588431 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.588461 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.588472 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.589290 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.589324 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.589339 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.589355 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 
01:05:44.589363 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.589371 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.594317 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.594353 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.594381 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.594600 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.594621 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.594632 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.594652 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.594675 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.594694 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\nI0507 01:05:44.594706 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.594715 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.594734 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\nI0507 01:05:44.599955 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.599982 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.599999 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.600829 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.600859 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.600887 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\nI0507 01:05:44.600924 2549 log.go:172] (0xc000aa1760) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I0507 01:05:44.600954 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.600984 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\n http://10.108.251.54:80/\nI0507 01:05:44.601003 2549 log.go:172] (0xc000aa1760) Data frame 
received for 3\nI0507 01:05:44.601014 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.601024 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.605405 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.605437 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.605462 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.605818 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.605830 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.605836 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.605859 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.605877 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.605909 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.610123 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.610140 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.610150 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.610787 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.610801 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.610810 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.610875 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.610903 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.610952 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.617547 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.617571 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.617592 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.618007 2549 log.go:172] 
(0xc000aa1760) Data frame received for 5\nI0507 01:05:44.618029 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.618044 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.618057 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.618101 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.618152 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.623111 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.623141 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.623162 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.624013 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.624039 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.624049 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.624061 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.624070 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.624082 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.628032 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.628062 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.628083 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.628508 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.628523 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.628538 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.628562 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.628591 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.628619 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.632085 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.632101 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.632110 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.632386 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.632399 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.632406 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.632435 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.632480 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.632508 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.637931 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.637989 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.638005 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.638330 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.638354 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.638391 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.638410 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.638428 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.638445 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.642625 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.642649 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.642668 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.643583 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.643603 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 
01:05:44.643611 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.643632 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.643644 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.643655 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.647312 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.647332 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.647350 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.647840 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.647867 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.647897 2549 log.go:172] (0xc00069a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.251.54:80/\nI0507 01:05:44.647927 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.647948 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.647965 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.652136 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.652154 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.652169 2549 log.go:172] (0xc000240780) (3) Data frame sent\nI0507 01:05:44.652772 2549 log.go:172] (0xc000aa1760) Data frame received for 3\nI0507 01:05:44.652786 2549 log.go:172] (0xc000240780) (3) Data frame handling\nI0507 01:05:44.652801 2549 log.go:172] (0xc000aa1760) Data frame received for 5\nI0507 01:05:44.652822 2549 log.go:172] (0xc00069a0a0) (5) Data frame handling\nI0507 01:05:44.654778 2549 log.go:172] (0xc000aa1760) Data frame received for 1\nI0507 01:05:44.654805 2549 log.go:172] (0xc000bb80a0) (1) Data frame handling\nI0507 01:05:44.654827 2549 log.go:172] (0xc000bb80a0) (1) Data frame sent\nI0507 01:05:44.654844 2549 log.go:172] (0xc000aa1760) 
(0xc000bb80a0) Stream removed, broadcasting: 1\nI0507 01:05:44.654862 2549 log.go:172] (0xc000aa1760) Go away received\nI0507 01:05:44.655122 2549 log.go:172] (0xc000aa1760) (0xc000bb80a0) Stream removed, broadcasting: 1\nI0507 01:05:44.655136 2549 log.go:172] (0xc000aa1760) (0xc000240780) Stream removed, broadcasting: 3\nI0507 01:05:44.655143 2549 log.go:172] (0xc000aa1760) (0xc00069a0a0) Stream removed, broadcasting: 5\n" May 7 01:05:44.659: INFO: stdout: "\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-z66f8\naffinity-clusterip-transition-z66f8" May 7 01:05:44.660: INFO: Received response from host: May 7 01:05:44.660: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.660: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.660: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.660: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.660: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.660: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.660: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.660: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.660: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.660: INFO: Received response from host: 
affinity-clusterip-transition-z66f8 May 7 01:05:44.660: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.660: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.660: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.660: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.660: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.660: INFO: Received response from host: affinity-clusterip-transition-z66f8 May 7 01:05:44.660: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-8140, will wait for the garbage collector to delete the pods May 7 01:05:44.762: INFO: Deleting ReplicationController affinity-clusterip-transition took: 9.074412ms May 7 01:05:45.062: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 300.302246ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:05:55.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8140" for this suite. 
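[editor's note] The stdout above is the raw response list the test asserts on: with ClientIP session affinity active, every non-empty response must name the same backend pod. A minimal local sketch of that uniqueness check over captured output (the pod name is taken from this run; the capture is a stand-in for the real curl loop):

```shell
# All responses in this run named the same pod; the assertion reduces to
# "exactly one unique hostname in the capture".
responses="affinity-clusterip-transition-z66f8
affinity-clusterip-transition-z66f8
affinity-clusterip-transition-z66f8"
unique=$(printf '%s\n' "$responses" | sort -u | wc -l)
[ "$unique" -eq 1 ] && echo "session affinity held"
```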
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:28.717 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":212,"skipped":3619,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:05:55.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-386.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-386.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-386.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-386.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-386.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-386.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 01:06:02.778: INFO: DNS probes using dns-386/dns-test-5905caaf-18ee-4cf2-bd6b-25474db9ff1d succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:06:03.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-386" for this suite. 
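[editor's note] The probe scripts above build the pod A record by dashing the pod's IP and appending the namespace's pod domain; the doubled `$$` in the log is the framework's template escaping, so in plain shell each is a single `$`. A standalone sketch of that transform (the IP is hypothetical; `dns-386` is this run's namespace):

```shell
# Derive the pod A record name from a pod IP, as the wheezy/jessie probes do.
ip="10.244.1.5"   # hypothetical pod IP
podARec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-386.pod.cluster.local"}')
echo "$podARec"   # 10-244-1-5.dns-386.pod.cluster.local
```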
• [SLOW TEST:7.803 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":213,"skipped":3621,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:06:03.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3444.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3444.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 01:06:12.045: INFO: DNS probes using dns-3444/dns-test-55534516-dede-4ace-8e06-95f3d781a531 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:06:12.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3444" for this suite. 
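[editor's note] Both probe scripts record success by writing an `OK` marker file per lookup, which the prober later reads back from `/results` to decide the probe succeeded. A local sketch of that marker pattern, with a stand-in value replacing the real `dig +noall +answer` output:

```shell
# The real probe writes the marker only when the lookup returned at least
# one record (i.e. $check is non-empty); a tmpdir stands in for /results.
resdir=$(mktemp -d)
check="10.96.0.1"   # stand-in for a non-empty dig answer
test -n "$check" && echo OK > "$resdir/wheezy_udp@kubernetes.default.svc.cluster.local"
cat "$resdir/wheezy_udp@kubernetes.default.svc.cluster.local"   # OK
```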
• [SLOW TEST:9.930 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":214,"skipped":3630,"failed":0} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:06:13.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-f144b319-621b-4127-9fe2-4eab47f8e3c9 in namespace container-probe-2200 May 7 01:06:19.995: INFO: Started pod liveness-f144b319-621b-4127-9fe2-4eab47f8e3c9 in namespace container-probe-2200 STEP: checking the pod's current state and verifying that restartCount is present May 7 01:06:19.999: INFO: Initial restart count of pod liveness-f144b319-621b-4127-9fe2-4eab47f8e3c9 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:10:21.035: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2200" for this suite. • [SLOW TEST:247.739 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":215,"skipped":3631,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:10:21.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 7 01:10:21.706: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a66dc95-549c-4f87-9358-9fa342510dac" in namespace "projected-4770" to be "Succeeded or Failed" May 7 01:10:21.787: INFO: Pod "downwardapi-volume-8a66dc95-549c-4f87-9358-9fa342510dac": Phase="Pending", Reason="", readiness=false. 
Elapsed: 81.649361ms May 7 01:10:23.792: INFO: Pod "downwardapi-volume-8a66dc95-549c-4f87-9358-9fa342510dac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086083166s May 7 01:10:25.796: INFO: Pod "downwardapi-volume-8a66dc95-549c-4f87-9358-9fa342510dac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090633517s May 7 01:10:27.801: INFO: Pod "downwardapi-volume-8a66dc95-549c-4f87-9358-9fa342510dac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095697092s STEP: Saw pod success May 7 01:10:27.801: INFO: Pod "downwardapi-volume-8a66dc95-549c-4f87-9358-9fa342510dac" satisfied condition "Succeeded or Failed" May 7 01:10:27.804: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8a66dc95-549c-4f87-9358-9fa342510dac container client-container: STEP: delete the pod May 7 01:10:27.852: INFO: Waiting for pod downwardapi-volume-8a66dc95-549c-4f87-9358-9fa342510dac to disappear May 7 01:10:27.862: INFO: Pod downwardapi-volume-8a66dc95-549c-4f87-9358-9fa342510dac no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:10:27.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4770" for this suite. 
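[editor's note] The test above mounts a downward API item with an explicit file mode and verifies it from inside the container; the in-pod check reduces to reading the mounted file's mode back. A local sketch of that check (0400 is an illustrative mode, not shown in this log; assumes GNU stat):

```shell
# Apply a restrictive mode to a scratch file and read it back the way the
# in-pod verification would (mode value is illustrative).
f=$(mktemp)
chmod 0400 "$f"
mode=$(stat -c '%a' "$f")
echo "$mode"   # 400
rm -f "$f"
```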
• [SLOW TEST:6.808 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":216,"skipped":3638,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:10:27.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0507 01:10:38.036705 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 7 01:10:38.036: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:10:38.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7446" for this suite. 
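[editor's note] The "wait for all pods to be garbage collected" step above is a poll loop: the framework re-checks until no pods owned by the deleted RC remain or a timeout elapses. A generic sketch of that retry shape (the completion condition here is a stand-in, not a real API call):

```shell
# Poll until the stand-in condition reports the pods are gone, up to a cap.
tries=0
gone=""
until [ -n "$gone" ] || [ "$tries" -ge 10 ]; do
  tries=$((tries + 1))
  # stand-in for "no pods match the deleted RC's selector any more"
  [ "$tries" -ge 3 ] && gone=yes
done
echo "pods gone after $tries checks"   # pods gone after 3 checks
```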
• [SLOW TEST:10.192 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":217,"skipped":3641,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:10:38.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 7 01:10:50.207: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2690 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:10:50.207: INFO: >>> kubeConfig: /root/.kube/config I0507 01:10:50.246968 7 log.go:172] (0xc002d90790) (0xc000b7c140) Create stream I0507 01:10:50.247002 7 log.go:172] (0xc002d90790) 
(0xc000b7c140) Stream added, broadcasting: 1 I0507 01:10:50.252438 7 log.go:172] (0xc002d90790) Reply frame received for 1 I0507 01:10:50.252480 7 log.go:172] (0xc002d90790) (0xc001376140) Create stream I0507 01:10:50.252492 7 log.go:172] (0xc002d90790) (0xc001376140) Stream added, broadcasting: 3 I0507 01:10:50.254064 7 log.go:172] (0xc002d90790) Reply frame received for 3 I0507 01:10:50.254139 7 log.go:172] (0xc002d90790) (0xc001d35900) Create stream I0507 01:10:50.254177 7 log.go:172] (0xc002d90790) (0xc001d35900) Stream added, broadcasting: 5 I0507 01:10:50.255647 7 log.go:172] (0xc002d90790) Reply frame received for 5 I0507 01:10:50.306980 7 log.go:172] (0xc002d90790) Data frame received for 3 I0507 01:10:50.307013 7 log.go:172] (0xc001376140) (3) Data frame handling I0507 01:10:50.307023 7 log.go:172] (0xc001376140) (3) Data frame sent I0507 01:10:50.307033 7 log.go:172] (0xc002d90790) Data frame received for 3 I0507 01:10:50.307049 7 log.go:172] (0xc001376140) (3) Data frame handling I0507 01:10:50.307063 7 log.go:172] (0xc002d90790) Data frame received for 5 I0507 01:10:50.307072 7 log.go:172] (0xc001d35900) (5) Data frame handling I0507 01:10:50.308210 7 log.go:172] (0xc002d90790) Data frame received for 1 I0507 01:10:50.308251 7 log.go:172] (0xc000b7c140) (1) Data frame handling I0507 01:10:50.308275 7 log.go:172] (0xc000b7c140) (1) Data frame sent I0507 01:10:50.308292 7 log.go:172] (0xc002d90790) (0xc000b7c140) Stream removed, broadcasting: 1 I0507 01:10:50.308310 7 log.go:172] (0xc002d90790) Go away received I0507 01:10:50.308565 7 log.go:172] (0xc002d90790) (0xc000b7c140) Stream removed, broadcasting: 1 I0507 01:10:50.308581 7 log.go:172] (0xc002d90790) (0xc001376140) Stream removed, broadcasting: 3 I0507 01:10:50.308587 7 log.go:172] (0xc002d90790) (0xc001d35900) Stream removed, broadcasting: 5 May 7 01:10:50.308: INFO: Exec stderr: "" May 7 01:10:50.308: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2690 
PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:10:50.308: INFO: >>> kubeConfig: /root/.kube/config I0507 01:10:50.340274 7 log.go:172] (0xc00066e0b0) (0xc0018ca280) Create stream I0507 01:10:50.340314 7 log.go:172] (0xc00066e0b0) (0xc0018ca280) Stream added, broadcasting: 1 I0507 01:10:50.342520 7 log.go:172] (0xc00066e0b0) Reply frame received for 1 I0507 01:10:50.342572 7 log.go:172] (0xc00066e0b0) (0xc000b7d360) Create stream I0507 01:10:50.342587 7 log.go:172] (0xc00066e0b0) (0xc000b7d360) Stream added, broadcasting: 3 I0507 01:10:50.343547 7 log.go:172] (0xc00066e0b0) Reply frame received for 3 I0507 01:10:50.343583 7 log.go:172] (0xc00066e0b0) (0xc000b7d900) Create stream I0507 01:10:50.343595 7 log.go:172] (0xc00066e0b0) (0xc000b7d900) Stream added, broadcasting: 5 I0507 01:10:50.344489 7 log.go:172] (0xc00066e0b0) Reply frame received for 5 I0507 01:10:50.426050 7 log.go:172] (0xc00066e0b0) Data frame received for 5 I0507 01:10:50.426081 7 log.go:172] (0xc00066e0b0) Data frame received for 3 I0507 01:10:50.426122 7 log.go:172] (0xc000b7d360) (3) Data frame handling I0507 01:10:50.426137 7 log.go:172] (0xc000b7d360) (3) Data frame sent I0507 01:10:50.426147 7 log.go:172] (0xc00066e0b0) Data frame received for 3 I0507 01:10:50.426156 7 log.go:172] (0xc000b7d360) (3) Data frame handling I0507 01:10:50.426179 7 log.go:172] (0xc000b7d900) (5) Data frame handling I0507 01:10:50.427167 7 log.go:172] (0xc00066e0b0) Data frame received for 1 I0507 01:10:50.427187 7 log.go:172] (0xc0018ca280) (1) Data frame handling I0507 01:10:50.427203 7 log.go:172] (0xc0018ca280) (1) Data frame sent I0507 01:10:50.427217 7 log.go:172] (0xc00066e0b0) (0xc0018ca280) Stream removed, broadcasting: 1 I0507 01:10:50.427248 7 log.go:172] (0xc00066e0b0) Go away received I0507 01:10:50.427320 7 log.go:172] (0xc00066e0b0) (0xc0018ca280) Stream removed, broadcasting: 1 I0507 01:10:50.427353 7 log.go:172] (0xc00066e0b0) 
(0xc000b7d360) Stream removed, broadcasting: 3 I0507 01:10:50.427369 7 log.go:172] (0xc00066e0b0) (0xc000b7d900) Stream removed, broadcasting: 5 May 7 01:10:50.427: INFO: Exec stderr: "" May 7 01:10:50.427: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2690 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:10:50.427: INFO: >>> kubeConfig: /root/.kube/config I0507 01:10:50.456824 7 log.go:172] (0xc000840210) (0xc0017f63c0) Create stream I0507 01:10:50.456854 7 log.go:172] (0xc000840210) (0xc0017f63c0) Stream added, broadcasting: 1 I0507 01:10:50.459252 7 log.go:172] (0xc000840210) Reply frame received for 1 I0507 01:10:50.459295 7 log.go:172] (0xc000840210) (0xc001d35b80) Create stream I0507 01:10:50.459309 7 log.go:172] (0xc000840210) (0xc001d35b80) Stream added, broadcasting: 3 I0507 01:10:50.460322 7 log.go:172] (0xc000840210) Reply frame received for 3 I0507 01:10:50.460370 7 log.go:172] (0xc000840210) (0xc001d35e00) Create stream I0507 01:10:50.460385 7 log.go:172] (0xc000840210) (0xc001d35e00) Stream added, broadcasting: 5 I0507 01:10:50.461488 7 log.go:172] (0xc000840210) Reply frame received for 5 I0507 01:10:50.530007 7 log.go:172] (0xc000840210) Data frame received for 5 I0507 01:10:50.530034 7 log.go:172] (0xc001d35e00) (5) Data frame handling I0507 01:10:50.530071 7 log.go:172] (0xc000840210) Data frame received for 3 I0507 01:10:50.530118 7 log.go:172] (0xc001d35b80) (3) Data frame handling I0507 01:10:50.530146 7 log.go:172] (0xc001d35b80) (3) Data frame sent I0507 01:10:50.530168 7 log.go:172] (0xc000840210) Data frame received for 3 I0507 01:10:50.530186 7 log.go:172] (0xc001d35b80) (3) Data frame handling I0507 01:10:50.531674 7 log.go:172] (0xc000840210) Data frame received for 1 I0507 01:10:50.531703 7 log.go:172] (0xc0017f63c0) (1) Data frame handling I0507 01:10:50.531716 7 log.go:172] (0xc0017f63c0) (1) Data frame sent I0507 01:10:50.531743 
7 log.go:172] (0xc000840210) (0xc0017f63c0) Stream removed, broadcasting: 1 I0507 01:10:50.531769 7 log.go:172] (0xc000840210) Go away received I0507 01:10:50.531866 7 log.go:172] (0xc000840210) (0xc0017f63c0) Stream removed, broadcasting: 1 I0507 01:10:50.531881 7 log.go:172] (0xc000840210) (0xc001d35b80) Stream removed, broadcasting: 3 I0507 01:10:50.531890 7 log.go:172] (0xc000840210) (0xc001d35e00) Stream removed, broadcasting: 5 May 7 01:10:50.531: INFO: Exec stderr: "" May 7 01:10:50.531: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2690 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:10:50.531: INFO: >>> kubeConfig: /root/.kube/config I0507 01:10:50.566598 7 log.go:172] (0xc00066e6e0) (0xc0018ca640) Create stream I0507 01:10:50.566638 7 log.go:172] (0xc00066e6e0) (0xc0018ca640) Stream added, broadcasting: 1 I0507 01:10:50.568961 7 log.go:172] (0xc00066e6e0) Reply frame received for 1 I0507 01:10:50.569014 7 log.go:172] (0xc00066e6e0) (0xc000b7dae0) Create stream I0507 01:10:50.569031 7 log.go:172] (0xc00066e6e0) (0xc000b7dae0) Stream added, broadcasting: 3 I0507 01:10:50.570536 7 log.go:172] (0xc00066e6e0) Reply frame received for 3 I0507 01:10:50.570584 7 log.go:172] (0xc00066e6e0) (0xc000b7df40) Create stream I0507 01:10:50.570599 7 log.go:172] (0xc00066e6e0) (0xc000b7df40) Stream added, broadcasting: 5 I0507 01:10:50.571658 7 log.go:172] (0xc00066e6e0) Reply frame received for 5 I0507 01:10:50.648372 7 log.go:172] (0xc00066e6e0) Data frame received for 5 I0507 01:10:50.648419 7 log.go:172] (0xc00066e6e0) Data frame received for 3 I0507 01:10:50.648496 7 log.go:172] (0xc000b7dae0) (3) Data frame handling I0507 01:10:50.648523 7 log.go:172] (0xc000b7dae0) (3) Data frame sent I0507 01:10:50.648544 7 log.go:172] (0xc00066e6e0) Data frame received for 3 I0507 01:10:50.648562 7 log.go:172] (0xc000b7dae0) (3) Data frame handling I0507 
01:10:50.648624 7 log.go:172] (0xc000b7df40) (5) Data frame handling I0507 01:10:50.650329 7 log.go:172] (0xc00066e6e0) Data frame received for 1 I0507 01:10:50.650359 7 log.go:172] (0xc0018ca640) (1) Data frame handling I0507 01:10:50.650383 7 log.go:172] (0xc0018ca640) (1) Data frame sent I0507 01:10:50.650407 7 log.go:172] (0xc00066e6e0) (0xc0018ca640) Stream removed, broadcasting: 1 I0507 01:10:50.650527 7 log.go:172] (0xc00066e6e0) (0xc0018ca640) Stream removed, broadcasting: 1 I0507 01:10:50.650554 7 log.go:172] (0xc00066e6e0) (0xc000b7dae0) Stream removed, broadcasting: 3 I0507 01:10:50.650574 7 log.go:172] (0xc00066e6e0) (0xc000b7df40) Stream removed, broadcasting: 5 May 7 01:10:50.650: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount I0507 01:10:50.650648 7 log.go:172] (0xc00066e6e0) Go away received May 7 01:10:50.650: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2690 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:10:50.650: INFO: >>> kubeConfig: /root/.kube/config I0507 01:10:50.686215 7 log.go:172] (0xc002d90dc0) (0xc000a126e0) Create stream I0507 01:10:50.686253 7 log.go:172] (0xc002d90dc0) (0xc000a126e0) Stream added, broadcasting: 1 I0507 01:10:50.688614 7 log.go:172] (0xc002d90dc0) Reply frame received for 1 I0507 01:10:50.688651 7 log.go:172] (0xc002d90dc0) (0xc0017f6640) Create stream I0507 01:10:50.688662 7 log.go:172] (0xc002d90dc0) (0xc0017f6640) Stream added, broadcasting: 3 I0507 01:10:50.689731 7 log.go:172] (0xc002d90dc0) Reply frame received for 3 I0507 01:10:50.689770 7 log.go:172] (0xc002d90dc0) (0xc000a12820) Create stream I0507 01:10:50.689782 7 log.go:172] (0xc002d90dc0) (0xc000a12820) Stream added, broadcasting: 5 I0507 01:10:50.690709 7 log.go:172] (0xc002d90dc0) Reply frame received for 5 I0507 01:10:50.757591 7 log.go:172] (0xc002d90dc0) Data 
frame received for 3 I0507 01:10:50.757639 7 log.go:172] (0xc0017f6640) (3) Data frame handling I0507 01:10:50.757667 7 log.go:172] (0xc0017f6640) (3) Data frame sent I0507 01:10:50.757693 7 log.go:172] (0xc002d90dc0) Data frame received for 3 I0507 01:10:50.757704 7 log.go:172] (0xc0017f6640) (3) Data frame handling I0507 01:10:50.757739 7 log.go:172] (0xc002d90dc0) Data frame received for 5 I0507 01:10:50.757762 7 log.go:172] (0xc000a12820) (5) Data frame handling I0507 01:10:50.758726 7 log.go:172] (0xc002d90dc0) Data frame received for 1 I0507 01:10:50.758750 7 log.go:172] (0xc000a126e0) (1) Data frame handling I0507 01:10:50.758769 7 log.go:172] (0xc000a126e0) (1) Data frame sent I0507 01:10:50.759173 7 log.go:172] (0xc002d90dc0) (0xc000a126e0) Stream removed, broadcasting: 1 I0507 01:10:50.759204 7 log.go:172] (0xc002d90dc0) Go away received I0507 01:10:50.759346 7 log.go:172] (0xc002d90dc0) (0xc000a126e0) Stream removed, broadcasting: 1 I0507 01:10:50.759394 7 log.go:172] (0xc002d90dc0) (0xc0017f6640) Stream removed, broadcasting: 3 I0507 01:10:50.759410 7 log.go:172] (0xc002d90dc0) (0xc000a12820) Stream removed, broadcasting: 5 May 7 01:10:50.759: INFO: Exec stderr: "" May 7 01:10:50.759: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2690 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:10:50.759: INFO: >>> kubeConfig: /root/.kube/config I0507 01:10:50.793910 7 log.go:172] (0xc00066ef20) (0xc0018ca960) Create stream I0507 01:10:50.793942 7 log.go:172] (0xc00066ef20) (0xc0018ca960) Stream added, broadcasting: 1 I0507 01:10:50.796632 7 log.go:172] (0xc00066ef20) Reply frame received for 1 I0507 01:10:50.796672 7 log.go:172] (0xc00066ef20) (0xc001d35ea0) Create stream I0507 01:10:50.796685 7 log.go:172] (0xc00066ef20) (0xc001d35ea0) Stream added, broadcasting: 3 I0507 01:10:50.798056 7 log.go:172] (0xc00066ef20) Reply frame received for 3 I0507 
01:10:50.798097 7 log.go:172] (0xc00066ef20) (0xc000a12960) Create stream I0507 01:10:50.798111 7 log.go:172] (0xc00066ef20) (0xc000a12960) Stream added, broadcasting: 5 I0507 01:10:50.799099 7 log.go:172] (0xc00066ef20) Reply frame received for 5 I0507 01:10:50.861088 7 log.go:172] (0xc00066ef20) Data frame received for 5 I0507 01:10:50.861315 7 log.go:172] (0xc000a12960) (5) Data frame handling I0507 01:10:50.861342 7 log.go:172] (0xc00066ef20) Data frame received for 3 I0507 01:10:50.861352 7 log.go:172] (0xc001d35ea0) (3) Data frame handling I0507 01:10:50.861362 7 log.go:172] (0xc001d35ea0) (3) Data frame sent I0507 01:10:50.861373 7 log.go:172] (0xc00066ef20) Data frame received for 3 I0507 01:10:50.861387 7 log.go:172] (0xc001d35ea0) (3) Data frame handling I0507 01:10:50.862539 7 log.go:172] (0xc00066ef20) Data frame received for 1 I0507 01:10:50.862553 7 log.go:172] (0xc0018ca960) (1) Data frame handling I0507 01:10:50.862568 7 log.go:172] (0xc0018ca960) (1) Data frame sent I0507 01:10:50.862645 7 log.go:172] (0xc00066ef20) (0xc0018ca960) Stream removed, broadcasting: 1 I0507 01:10:50.862662 7 log.go:172] (0xc00066ef20) Go away received I0507 01:10:50.862755 7 log.go:172] (0xc00066ef20) (0xc0018ca960) Stream removed, broadcasting: 1 I0507 01:10:50.862785 7 log.go:172] (0xc00066ef20) (0xc001d35ea0) Stream removed, broadcasting: 3 I0507 01:10:50.862804 7 log.go:172] (0xc00066ef20) (0xc000a12960) Stream removed, broadcasting: 5 May 7 01:10:50.862: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 7 01:10:50.862: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2690 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:10:50.862: INFO: >>> kubeConfig: /root/.kube/config I0507 01:10:50.895652 7 log.go:172] (0xc0008409a0) (0xc0017f6820) Create stream I0507 01:10:50.895690 7 
log.go:172] (0xc0008409a0) (0xc0017f6820) Stream added, broadcasting: 1 I0507 01:10:50.898575 7 log.go:172] (0xc0008409a0) Reply frame received for 1 I0507 01:10:50.898624 7 log.go:172] (0xc0008409a0) (0xc000a12e60) Create stream I0507 01:10:50.898637 7 log.go:172] (0xc0008409a0) (0xc000a12e60) Stream added, broadcasting: 3 I0507 01:10:50.899409 7 log.go:172] (0xc0008409a0) Reply frame received for 3 I0507 01:10:50.899438 7 log.go:172] (0xc0008409a0) (0xc0018caaa0) Create stream I0507 01:10:50.899448 7 log.go:172] (0xc0008409a0) (0xc0018caaa0) Stream added, broadcasting: 5 I0507 01:10:50.900200 7 log.go:172] (0xc0008409a0) Reply frame received for 5 I0507 01:10:50.949907 7 log.go:172] (0xc0008409a0) Data frame received for 3 I0507 01:10:50.949934 7 log.go:172] (0xc000a12e60) (3) Data frame handling I0507 01:10:50.949947 7 log.go:172] (0xc000a12e60) (3) Data frame sent I0507 01:10:50.949962 7 log.go:172] (0xc0008409a0) Data frame received for 3 I0507 01:10:50.949973 7 log.go:172] (0xc000a12e60) (3) Data frame handling I0507 01:10:50.950003 7 log.go:172] (0xc0008409a0) Data frame received for 5 I0507 01:10:50.950023 7 log.go:172] (0xc0018caaa0) (5) Data frame handling I0507 01:10:50.951512 7 log.go:172] (0xc0008409a0) Data frame received for 1 I0507 01:10:50.951536 7 log.go:172] (0xc0017f6820) (1) Data frame handling I0507 01:10:50.951568 7 log.go:172] (0xc0017f6820) (1) Data frame sent I0507 01:10:50.951590 7 log.go:172] (0xc0008409a0) (0xc0017f6820) Stream removed, broadcasting: 1 I0507 01:10:50.951660 7 log.go:172] (0xc0008409a0) Go away received I0507 01:10:50.951693 7 log.go:172] (0xc0008409a0) (0xc0017f6820) Stream removed, broadcasting: 1 I0507 01:10:50.951708 7 log.go:172] (0xc0008409a0) (0xc000a12e60) Stream removed, broadcasting: 3 I0507 01:10:50.951747 7 log.go:172] (0xc0008409a0) (0xc0018caaa0) Stream removed, broadcasting: 5 May 7 01:10:50.951: INFO: Exec stderr: "" May 7 01:10:50.951: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] 
Namespace:e2e-kubelet-etc-hosts-2690 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:10:50.951: INFO: >>> kubeConfig: /root/.kube/config I0507 01:10:50.984336 7 log.go:172] (0xc000840dc0) (0xc0017f6a00) Create stream I0507 01:10:50.984376 7 log.go:172] (0xc000840dc0) (0xc0017f6a00) Stream added, broadcasting: 1 I0507 01:10:50.988664 7 log.go:172] (0xc000840dc0) Reply frame received for 1 I0507 01:10:50.988721 7 log.go:172] (0xc000840dc0) (0xc0017f6b40) Create stream I0507 01:10:50.988741 7 log.go:172] (0xc000840dc0) (0xc0017f6b40) Stream added, broadcasting: 3 I0507 01:10:50.989868 7 log.go:172] (0xc000840dc0) Reply frame received for 3 I0507 01:10:50.989914 7 log.go:172] (0xc000840dc0) (0xc0015e8140) Create stream I0507 01:10:50.989934 7 log.go:172] (0xc000840dc0) (0xc0015e8140) Stream added, broadcasting: 5 I0507 01:10:50.990759 7 log.go:172] (0xc000840dc0) Reply frame received for 5 I0507 01:10:51.060665 7 log.go:172] (0xc000840dc0) Data frame received for 3 I0507 01:10:51.060705 7 log.go:172] (0xc0017f6b40) (3) Data frame handling I0507 01:10:51.060725 7 log.go:172] (0xc0017f6b40) (3) Data frame sent I0507 01:10:51.060738 7 log.go:172] (0xc000840dc0) Data frame received for 3 I0507 01:10:51.060747 7 log.go:172] (0xc0017f6b40) (3) Data frame handling I0507 01:10:51.060766 7 log.go:172] (0xc000840dc0) Data frame received for 5 I0507 01:10:51.060785 7 log.go:172] (0xc0015e8140) (5) Data frame handling I0507 01:10:51.062648 7 log.go:172] (0xc000840dc0) Data frame received for 1 I0507 01:10:51.062670 7 log.go:172] (0xc0017f6a00) (1) Data frame handling I0507 01:10:51.062685 7 log.go:172] (0xc0017f6a00) (1) Data frame sent I0507 01:10:51.062697 7 log.go:172] (0xc000840dc0) (0xc0017f6a00) Stream removed, broadcasting: 1 I0507 01:10:51.062713 7 log.go:172] (0xc000840dc0) Go away received I0507 01:10:51.062861 7 log.go:172] (0xc000840dc0) (0xc0017f6a00) Stream removed, broadcasting: 1 
I0507 01:10:51.062876 7 log.go:172] (0xc000840dc0) (0xc0017f6b40) Stream removed, broadcasting: 3 I0507 01:10:51.062882 7 log.go:172] (0xc000840dc0) (0xc0015e8140) Stream removed, broadcasting: 5 May 7 01:10:51.062: INFO: Exec stderr: "" May 7 01:10:51.062: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2690 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:10:51.062: INFO: >>> kubeConfig: /root/.kube/config I0507 01:10:51.086850 7 log.go:172] (0xc001fd6420) (0xc0015e8820) Create stream I0507 01:10:51.086887 7 log.go:172] (0xc001fd6420) (0xc0015e8820) Stream added, broadcasting: 1 I0507 01:10:51.089712 7 log.go:172] (0xc001fd6420) Reply frame received for 1 I0507 01:10:51.089740 7 log.go:172] (0xc001fd6420) (0xc000393ea0) Create stream I0507 01:10:51.089750 7 log.go:172] (0xc001fd6420) (0xc000393ea0) Stream added, broadcasting: 3 I0507 01:10:51.090725 7 log.go:172] (0xc001fd6420) Reply frame received for 3 I0507 01:10:51.090750 7 log.go:172] (0xc001fd6420) (0xc0017f6c80) Create stream I0507 01:10:51.090759 7 log.go:172] (0xc001fd6420) (0xc0017f6c80) Stream added, broadcasting: 5 I0507 01:10:51.091451 7 log.go:172] (0xc001fd6420) Reply frame received for 5 I0507 01:10:51.153906 7 log.go:172] (0xc001fd6420) Data frame received for 5 I0507 01:10:51.153960 7 log.go:172] (0xc0017f6c80) (5) Data frame handling I0507 01:10:51.153984 7 log.go:172] (0xc001fd6420) Data frame received for 3 I0507 01:10:51.153997 7 log.go:172] (0xc000393ea0) (3) Data frame handling I0507 01:10:51.154012 7 log.go:172] (0xc000393ea0) (3) Data frame sent I0507 01:10:51.154026 7 log.go:172] (0xc001fd6420) Data frame received for 3 I0507 01:10:51.154039 7 log.go:172] (0xc000393ea0) (3) Data frame handling I0507 01:10:51.155332 7 log.go:172] (0xc001fd6420) Data frame received for 1 I0507 01:10:51.155361 7 log.go:172] (0xc0015e8820) (1) Data frame handling I0507 01:10:51.155380 7 
log.go:172] (0xc0015e8820) (1) Data frame sent I0507 01:10:51.155393 7 log.go:172] (0xc001fd6420) (0xc0015e8820) Stream removed, broadcasting: 1 I0507 01:10:51.155413 7 log.go:172] (0xc001fd6420) Go away received I0507 01:10:51.155573 7 log.go:172] (0xc001fd6420) (0xc0015e8820) Stream removed, broadcasting: 1 I0507 01:10:51.155595 7 log.go:172] (0xc001fd6420) (0xc000393ea0) Stream removed, broadcasting: 3 I0507 01:10:51.155606 7 log.go:172] (0xc001fd6420) (0xc0017f6c80) Stream removed, broadcasting: 5 May 7 01:10:51.155: INFO: Exec stderr: "" May 7 01:10:51.155: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2690 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:10:51.155: INFO: >>> kubeConfig: /root/.kube/config I0507 01:10:51.186877 7 log.go:172] (0xc002d91550) (0xc000a130e0) Create stream I0507 01:10:51.186916 7 log.go:172] (0xc002d91550) (0xc000a130e0) Stream added, broadcasting: 1 I0507 01:10:51.189666 7 log.go:172] (0xc002d91550) Reply frame received for 1 I0507 01:10:51.189721 7 log.go:172] (0xc002d91550) (0xc0017f6e60) Create stream I0507 01:10:51.189741 7 log.go:172] (0xc002d91550) (0xc0017f6e60) Stream added, broadcasting: 3 I0507 01:10:51.190570 7 log.go:172] (0xc002d91550) Reply frame received for 3 I0507 01:10:51.190617 7 log.go:172] (0xc002d91550) (0xc0014200a0) Create stream I0507 01:10:51.190638 7 log.go:172] (0xc002d91550) (0xc0014200a0) Stream added, broadcasting: 5 I0507 01:10:51.191614 7 log.go:172] (0xc002d91550) Reply frame received for 5 I0507 01:10:51.270351 7 log.go:172] (0xc002d91550) Data frame received for 5 I0507 01:10:51.270379 7 log.go:172] (0xc0014200a0) (5) Data frame handling I0507 01:10:51.270402 7 log.go:172] (0xc002d91550) Data frame received for 3 I0507 01:10:51.270412 7 log.go:172] (0xc0017f6e60) (3) Data frame handling I0507 01:10:51.270424 7 log.go:172] (0xc0017f6e60) (3) Data frame sent I0507 
01:10:51.270437 7 log.go:172] (0xc002d91550) Data frame received for 3 I0507 01:10:51.270450 7 log.go:172] (0xc0017f6e60) (3) Data frame handling I0507 01:10:51.271426 7 log.go:172] (0xc002d91550) Data frame received for 1 I0507 01:10:51.271443 7 log.go:172] (0xc000a130e0) (1) Data frame handling I0507 01:10:51.271475 7 log.go:172] (0xc000a130e0) (1) Data frame sent I0507 01:10:51.271490 7 log.go:172] (0xc002d91550) (0xc000a130e0) Stream removed, broadcasting: 1 I0507 01:10:51.271525 7 log.go:172] (0xc002d91550) Go away received I0507 01:10:51.271613 7 log.go:172] (0xc002d91550) (0xc000a130e0) Stream removed, broadcasting: 1 I0507 01:10:51.271626 7 log.go:172] (0xc002d91550) (0xc0017f6e60) Stream removed, broadcasting: 3 I0507 01:10:51.271632 7 log.go:172] (0xc002d91550) (0xc0014200a0) Stream removed, broadcasting: 5 May 7 01:10:51.271: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:10:51.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2690" for this suite. 
• [SLOW TEST:13.216 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":218,"skipped":3653,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:10:51.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-e8a9da3c-dbd5-4b2e-829c-9fe430b7408e STEP: Creating a pod to test consume configMaps May 7 01:10:51.381: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-511cfdd0-74cf-42ee-851f-7e50aeb896cd" in namespace "projected-8828" to be "Succeeded or Failed" May 7 01:10:51.390: INFO: Pod "pod-projected-configmaps-511cfdd0-74cf-42ee-851f-7e50aeb896cd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.676656ms May 7 01:10:53.472: INFO: Pod "pod-projected-configmaps-511cfdd0-74cf-42ee-851f-7e50aeb896cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091063656s May 7 01:10:55.476: INFO: Pod "pod-projected-configmaps-511cfdd0-74cf-42ee-851f-7e50aeb896cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09487749s STEP: Saw pod success May 7 01:10:55.476: INFO: Pod "pod-projected-configmaps-511cfdd0-74cf-42ee-851f-7e50aeb896cd" satisfied condition "Succeeded or Failed" May 7 01:10:55.480: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-511cfdd0-74cf-42ee-851f-7e50aeb896cd container projected-configmap-volume-test: STEP: delete the pod May 7 01:10:55.556: INFO: Waiting for pod pod-projected-configmaps-511cfdd0-74cf-42ee-851f-7e50aeb896cd to disappear May 7 01:10:55.561: INFO: Pod pod-projected-configmaps-511cfdd0-74cf-42ee-851f-7e50aeb896cd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:10:55.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8828" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":219,"skipped":3675,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:10:55.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 7 01:10:56.616: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 7 01:10:58.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410656, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410656, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410656, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410656, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 01:11:00.860: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410656, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410656, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410656, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410656, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 01:11:03.873: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 01:11:03.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:11:05.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7074" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.672 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":220,"skipped":3676,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:11:05.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-23351cc6-1670-4852-b128-9e80892443fd STEP: Creating a pod to test consume secrets May 7 01:11:05.509: INFO: Waiting up to 5m0s for pod "pod-secrets-947f3099-f7ab-4a9a-9bef-02f72cbc93f4" in namespace "secrets-136" to be "Succeeded or Failed" May 7 01:11:05.514: INFO: Pod "pod-secrets-947f3099-f7ab-4a9a-9bef-02f72cbc93f4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.192058ms May 7 01:11:07.518: INFO: Pod "pod-secrets-947f3099-f7ab-4a9a-9bef-02f72cbc93f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008918714s May 7 01:11:09.522: INFO: Pod "pod-secrets-947f3099-f7ab-4a9a-9bef-02f72cbc93f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013029687s STEP: Saw pod success May 7 01:11:09.522: INFO: Pod "pod-secrets-947f3099-f7ab-4a9a-9bef-02f72cbc93f4" satisfied condition "Succeeded or Failed" May 7 01:11:09.524: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-947f3099-f7ab-4a9a-9bef-02f72cbc93f4 container secret-volume-test: STEP: delete the pod May 7 01:11:09.548: INFO: Waiting for pod pod-secrets-947f3099-f7ab-4a9a-9bef-02f72cbc93f4 to disappear May 7 01:11:09.565: INFO: Pod pod-secrets-947f3099-f7ab-4a9a-9bef-02f72cbc93f4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:11:09.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-136" for this suite. STEP: Destroying namespace "secret-namespace-6538" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":221,"skipped":3709,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:11:09.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 7 01:11:14.785: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:11:14.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7351" for this suite. 
• [SLOW TEST:5.273 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":222,"skipped":3727,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:11:14.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 7 01:11:14.932: 
INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1c369d7-b003-4716-a082-bf0f02d57aa7" in namespace "projected-168" to be "Succeeded or Failed" May 7 01:11:15.004: INFO: Pod "downwardapi-volume-e1c369d7-b003-4716-a082-bf0f02d57aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 71.450908ms May 7 01:11:17.012: INFO: Pod "downwardapi-volume-e1c369d7-b003-4716-a082-bf0f02d57aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079458691s May 7 01:11:19.016: INFO: Pod "downwardapi-volume-e1c369d7-b003-4716-a082-bf0f02d57aa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083781516s STEP: Saw pod success May 7 01:11:19.016: INFO: Pod "downwardapi-volume-e1c369d7-b003-4716-a082-bf0f02d57aa7" satisfied condition "Succeeded or Failed" May 7 01:11:19.020: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e1c369d7-b003-4716-a082-bf0f02d57aa7 container client-container: STEP: delete the pod May 7 01:11:19.116: INFO: Waiting for pod downwardapi-volume-e1c369d7-b003-4716-a082-bf0f02d57aa7 to disappear May 7 01:11:19.143: INFO: Pod downwardapi-volume-e1c369d7-b003-4716-a082-bf0f02d57aa7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:11:19.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-168" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":223,"skipped":3728,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:11:19.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 01:11:19.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8442' May 7 01:11:19.814: INFO: stderr: "" May 7 01:11:19.814: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 7 01:11:19.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8442' May 7 01:11:20.173: INFO: stderr: "" May 7 01:11:20.173: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 7 01:11:21.178: INFO: Selector matched 1 pods for map[app:agnhost] May 7 01:11:21.178: INFO: Found 0 / 1 May 7 01:11:22.177: INFO: Selector matched 1 pods for map[app:agnhost] May 7 01:11:22.177: INFO: Found 0 / 1 May 7 01:11:23.178: INFO: Selector matched 1 pods for map[app:agnhost] May 7 01:11:23.178: INFO: Found 1 / 1 May 7 01:11:23.178: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 7 01:11:23.180: INFO: Selector matched 1 pods for map[app:agnhost] May 7 01:11:23.181: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 7 01:11:23.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-n59l2 --namespace=kubectl-8442' May 7 01:11:23.337: INFO: stderr: "" May 7 01:11:23.337: INFO: stdout: "Name: agnhost-master-n59l2\nNamespace: kubectl-8442\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Thu, 07 May 2020 01:11:19 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.156\nIPs:\n IP: 10.244.1.156\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://c3f1b851e31c64f72d89e5f740aa6c2012df0d291ba248cd34c552e43e8e0b8f\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 07 May 2020 01:11:22 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-t4jvz (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-t4jvz:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-t4jvz\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: 
node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-8442/agnhost-master-n59l2 to latest-worker\n Normal Pulled 2s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 1s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container agnhost-master\n" May 7 01:11:23.337: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-8442' May 7 01:11:23.466: INFO: stderr: "" May 7 01:11:23.466: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8442\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-n59l2\n" May 7 01:11:23.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-8442' May 7 01:11:23.577: INFO: stderr: "" May 7 01:11:23.577: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8442\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.98.94.71\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.156:6379\nSession Affinity: None\nEvents: \n" May 7 
01:11:23.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe node latest-control-plane' May 7 01:11:23.703: INFO: stderr: "" May 7 01:11:23.703: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Thu, 07 May 2020 01:11:21 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 07 May 2020 01:10:01 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 07 May 2020 01:10:01 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 07 May 2020 01:10:01 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 07 May 2020 01:10:01 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 
2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 7d15h\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 7d15h\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d15h\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 7d15h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 7d15h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 7d15h\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d15h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 7d15h\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d15h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 7 01:11:23.703: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe namespace kubectl-8442' May 7 01:11:23.819: INFO: stderr: "" May 7 01:11:23.819: INFO: stdout: "Name: kubectl-8442\nLabels: e2e-framework=kubectl\n e2e-run=a2dd0c1c-d924-4644-9fa8-07db2b7bfd4f\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange 
resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:11:23.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8442" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":224,"skipped":3783,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:11:23.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 01:11:23.956: INFO: Create a RollingUpdate DaemonSet May 7 01:11:23.959: INFO: Check that daemon pods launch on every node of the cluster May 7 01:11:23.986: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 01:11:24.003: INFO: Number of nodes with available pods: 0 May 7 01:11:24.003: INFO: Node latest-worker is running more than one daemon pod May 7 01:11:25.007: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 01:11:25.009: INFO: Number of nodes with available pods: 0 May 7 01:11:25.009: INFO: Node latest-worker is running more than one daemon pod May 7 01:11:26.008: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 01:11:26.011: INFO: Number of nodes with available pods: 0 May 7 01:11:26.011: INFO: Node latest-worker is running more than one daemon pod May 7 01:11:27.040: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 01:11:27.044: INFO: Number of nodes with available pods: 0 May 7 01:11:27.044: INFO: Node latest-worker is running more than one daemon pod May 7 01:11:28.136: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 01:11:28.214: INFO: Number of nodes with available pods: 1 May 7 01:11:28.214: INFO: Node latest-worker2 is running more than one daemon pod May 7 01:11:29.083: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 01:11:29.086: INFO: Number of nodes with available pods: 1 May 7 01:11:29.086: INFO: Node latest-worker2 is running more than one daemon pod May 7 01:11:30.020: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 01:11:30.024: INFO: Number of nodes with available pods: 2 May 7 01:11:30.024: INFO: Number of running nodes: 2, number of available pods: 2 May 7 01:11:30.024: INFO: Update the DaemonSet to 
trigger a rollout May 7 01:11:30.189: INFO: Updating DaemonSet daemon-set May 7 01:11:45.273: INFO: Roll back the DaemonSet before rollout is complete May 7 01:11:45.280: INFO: Updating DaemonSet daemon-set May 7 01:11:45.280: INFO: Make sure DaemonSet rollback is complete May 7 01:11:45.300: INFO: Wrong image for pod: daemon-set-nhxgp. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 7 01:11:45.300: INFO: Pod daemon-set-nhxgp is not available May 7 01:11:45.320: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 01:11:46.325: INFO: Wrong image for pod: daemon-set-nhxgp. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 7 01:11:46.325: INFO: Pod daemon-set-nhxgp is not available May 7 01:11:46.328: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 01:11:47.325: INFO: Wrong image for pod: daemon-set-nhxgp. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 7 01:11:47.325: INFO: Pod daemon-set-nhxgp is not available May 7 01:11:47.328: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 01:11:48.358: INFO: Pod daemon-set-5sfzz is not available May 7 01:11:48.361: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3757, will wait for the garbage collector to delete the pods May 7 01:11:48.428: INFO: Deleting DaemonSet.extensions daemon-set took: 7.88151ms May 7 01:11:48.728: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.239123ms May 7 01:11:55.332: INFO: Number of nodes with available pods: 0 May 7 01:11:55.332: INFO: Number of running nodes: 0, number of available pods: 0 May 7 01:11:55.335: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3757/daemonsets","resourceVersion":"2181925"},"items":null} May 7 01:11:55.338: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3757/pods","resourceVersion":"2181925"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:11:55.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3757" for this suite. 
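The rollback check in the log above works by repeatedly comparing each DaemonSet pod's image against the expected one (`docker.io/library/httpd:2.4.38-alpine`, after rolling back from `foo:non-existent`) and counting a pod as needing replacement only when the images differ, so pods that are already correct are not restarted. A minimal sketch of that selection logic (hypothetical helper, not the e2e framework's actual code):

```python
# Sketch of the "rollback without unnecessary restarts" check seen above.
# Hypothetical helper, not the actual e2e framework code: given the pods a
# DaemonSet currently runs and the image the rollback restored, report which
# pods still carry the wrong image and must be replaced.

def pods_needing_replacement(pods, desired_image):
    """Return names of pods whose container image differs from desired_image."""
    return [name for name, image in pods.items() if image != desired_image]

if __name__ == "__main__":
    desired = "docker.io/library/httpd:2.4.38-alpine"
    pods = {
        "daemon-set-nhxgp": "foo:non-existent",  # still on the rolled-back-from image
        "daemon-set-5sfzz": desired,             # already correct; keep running
    }
    stale = pods_needing_replacement(pods, desired)
    print(stale)  # only the mismatched pod gets replaced
```

Only `daemon-set-nhxgp` is reported, matching the log, where the already-correct pod is left untouched while the mismatched one is replaced.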
• [SLOW TEST:31.530 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":225,"skipped":3788,"failed":0} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:11:55.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-5756 STEP: creating a selector STEP: Creating the service pods in kubernetes May 7 01:11:55.473: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 7 01:11:55.526: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 7 01:11:57.687: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 7 01:11:59.530: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 7 01:12:01.529: INFO: The status of Pod netserver-0 is Running 
(Ready = false) May 7 01:12:03.531: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 01:12:05.531: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 01:12:07.531: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 01:12:09.531: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 01:12:11.531: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 01:12:13.531: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 01:12:15.531: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 01:12:17.531: INFO: The status of Pod netserver-0 is Running (Ready = true) May 7 01:12:17.536: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 7 01:12:25.595: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.160 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5756 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:12:25.595: INFO: >>> kubeConfig: /root/.kube/config I0507 01:12:25.622691 7 log.go:172] (0xc002d90c60) (0xc001ae30e0) Create stream I0507 01:12:25.622719 7 log.go:172] (0xc002d90c60) (0xc001ae30e0) Stream added, broadcasting: 1 I0507 01:12:25.624422 7 log.go:172] (0xc002d90c60) Reply frame received for 1 I0507 01:12:25.624461 7 log.go:172] (0xc002d90c60) (0xc001275680) Create stream I0507 01:12:25.624484 7 log.go:172] (0xc002d90c60) (0xc001275680) Stream added, broadcasting: 3 I0507 01:12:25.625591 7 log.go:172] (0xc002d90c60) Reply frame received for 3 I0507 01:12:25.625643 7 log.go:172] (0xc002d90c60) (0xc00190a000) Create stream I0507 01:12:25.625667 7 log.go:172] (0xc002d90c60) (0xc00190a000) Stream added, broadcasting: 5 I0507 01:12:25.626580 7 log.go:172] (0xc002d90c60) Reply frame received for 5 I0507 01:12:26.680971 7 log.go:172] (0xc002d90c60) Data frame received for 3 I0507 01:12:26.681052 7 
log.go:172] (0xc001275680) (3) Data frame handling I0507 01:12:26.681082 7 log.go:172] (0xc001275680) (3) Data frame sent I0507 01:12:26.681102 7 log.go:172] (0xc002d90c60) Data frame received for 5 I0507 01:12:26.681284 7 log.go:172] (0xc00190a000) (5) Data frame handling I0507 01:12:26.681314 7 log.go:172] (0xc002d90c60) Data frame received for 3 I0507 01:12:26.681325 7 log.go:172] (0xc001275680) (3) Data frame handling I0507 01:12:26.682733 7 log.go:172] (0xc002d90c60) Data frame received for 1 I0507 01:12:26.682769 7 log.go:172] (0xc001ae30e0) (1) Data frame handling I0507 01:12:26.682810 7 log.go:172] (0xc001ae30e0) (1) Data frame sent I0507 01:12:26.682945 7 log.go:172] (0xc002d90c60) (0xc001ae30e0) Stream removed, broadcasting: 1 I0507 01:12:26.683002 7 log.go:172] (0xc002d90c60) Go away received I0507 01:12:26.683054 7 log.go:172] (0xc002d90c60) (0xc001ae30e0) Stream removed, broadcasting: 1 I0507 01:12:26.683078 7 log.go:172] (0xc002d90c60) (0xc001275680) Stream removed, broadcasting: 3 I0507 01:12:26.683092 7 log.go:172] (0xc002d90c60) (0xc00190a000) Stream removed, broadcasting: 5 May 7 01:12:26.683: INFO: Found all expected endpoints: [netserver-0] May 7 01:12:26.685: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.251 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5756 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:12:26.685: INFO: >>> kubeConfig: /root/.kube/config I0507 01:12:26.712110 7 log.go:172] (0xc001b26420) (0xc00190a8c0) Create stream I0507 01:12:26.712130 7 log.go:172] (0xc001b26420) (0xc00190a8c0) Stream added, broadcasting: 1 I0507 01:12:26.713710 7 log.go:172] (0xc001b26420) Reply frame received for 1 I0507 01:12:26.713730 7 log.go:172] (0xc001b26420) (0xc001a19d60) Create stream I0507 01:12:26.713738 7 log.go:172] (0xc001b26420) (0xc001a19d60) Stream added, broadcasting: 3 I0507 01:12:26.714321 7 log.go:172] 
(0xc001b26420) Reply frame received for 3 I0507 01:12:26.714343 7 log.go:172] (0xc001b26420) (0xc001275860) Create stream I0507 01:12:26.714352 7 log.go:172] (0xc001b26420) (0xc001275860) Stream added, broadcasting: 5 I0507 01:12:26.714838 7 log.go:172] (0xc001b26420) Reply frame received for 5 I0507 01:12:27.762667 7 log.go:172] (0xc001b26420) Data frame received for 3 I0507 01:12:27.762697 7 log.go:172] (0xc001a19d60) (3) Data frame handling I0507 01:12:27.762728 7 log.go:172] (0xc001a19d60) (3) Data frame sent I0507 01:12:27.762739 7 log.go:172] (0xc001b26420) Data frame received for 3 I0507 01:12:27.762747 7 log.go:172] (0xc001a19d60) (3) Data frame handling I0507 01:12:27.762950 7 log.go:172] (0xc001b26420) Data frame received for 5 I0507 01:12:27.762965 7 log.go:172] (0xc001275860) (5) Data frame handling I0507 01:12:27.769949 7 log.go:172] (0xc001b26420) Data frame received for 1 I0507 01:12:27.769975 7 log.go:172] (0xc00190a8c0) (1) Data frame handling I0507 01:12:27.769997 7 log.go:172] (0xc00190a8c0) (1) Data frame sent I0507 01:12:27.770013 7 log.go:172] (0xc001b26420) (0xc00190a8c0) Stream removed, broadcasting: 1 I0507 01:12:27.770030 7 log.go:172] (0xc001b26420) Go away received I0507 01:12:27.770108 7 log.go:172] (0xc001b26420) (0xc00190a8c0) Stream removed, broadcasting: 1 I0507 01:12:27.770123 7 log.go:172] (0xc001b26420) (0xc001a19d60) Stream removed, broadcasting: 3 I0507 01:12:27.770131 7 log.go:172] (0xc001b26420) (0xc001275860) Stream removed, broadcasting: 5 May 7 01:12:27.770: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:12:27.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5756" for this suite. 
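The connectivity check driving the `ExecWithOptions` calls above pipes `hostName` to `nc -w 1 -u <podIP> 8081` from a host-network test pod and expects each netserver pod to answer with its name, which is how the test concludes "Found all expected endpoints". The request/reply shape of that probe can be sketched locally with plain UDP sockets (loopback responder standing in for the netserver pod; the reply payload `netserver-0` is illustrative, not agnhost's exact wire format):

```python
# Local sketch of the node-pod UDP probe above: send "hostName" over UDP and
# expect the peer to reply with its pod name. A loopback responder stands in
# for the netserver pod; payload and port are illustrative assumptions.
import socket
import threading

def serve_once(sock, reply):
    data, addr = sock.recvfrom(1024)          # wait for one probe datagram
    if data.strip() == b"hostName":
        sock.sendto(reply, addr)              # answer with the "pod" name

def probe(addr, timeout=1.0):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as c:
        c.settimeout(timeout)                 # mirrors nc's "-w 1" deadline
        c.sendto(b"hostName", addr)
        data, _ = c.recvfrom(1024)
        return data.decode()

if __name__ == "__main__":
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))                # ephemeral port instead of 8081
    t = threading.Thread(target=serve_once, args=(srv, b"netserver-0"))
    t.start()
    print(probe(srv.getsockname()))
    t.join()
    srv.close()
```

Because UDP is connectionless, the `-w 1` timeout (here `settimeout`) is what turns "no reply" into a failed check rather than a hang, which is why the e2e test can poll endpoints until all netservers respond.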
• [SLOW TEST:32.452 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":226,"skipped":3792,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:12:27.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:12:35.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6680" for this suite. STEP: Destroying namespace "nsdeletetest-3309" for this suite. May 7 01:12:36.322: INFO: Namespace nsdeletetest-3309 was already deleted STEP: Destroying namespace "nsdeletetest-725" for this suite. • [SLOW TEST:8.517 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":227,"skipped":3799,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:12:36.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:12:47.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5628" for this suite. • [SLOW TEST:11.206 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":288,"completed":228,"skipped":3805,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:12:47.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3001 STEP: creating service affinity-clusterip in namespace services-3001 STEP: creating replication controller affinity-clusterip in namespace services-3001 I0507 01:12:47.712298 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-3001, replica count: 3 I0507 01:12:50.762806 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 01:12:53.763098 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 7 01:12:53.769: INFO: Creating new exec pod May 7 01:13:00.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3001 execpod-affinitygpq2m -- /bin/sh -x -c nc -zv -t -w 2 
affinity-clusterip 80' May 7 01:13:01.232: INFO: stderr: "I0507 01:13:01.124979 2719 log.go:172] (0xc00055ad10) (0xc000c10500) Create stream\nI0507 01:13:01.125028 2719 log.go:172] (0xc00055ad10) (0xc000c10500) Stream added, broadcasting: 1\nI0507 01:13:01.130114 2719 log.go:172] (0xc00055ad10) Reply frame received for 1\nI0507 01:13:01.130160 2719 log.go:172] (0xc00055ad10) (0xc00052e320) Create stream\nI0507 01:13:01.130171 2719 log.go:172] (0xc00055ad10) (0xc00052e320) Stream added, broadcasting: 3\nI0507 01:13:01.131229 2719 log.go:172] (0xc00055ad10) Reply frame received for 3\nI0507 01:13:01.131281 2719 log.go:172] (0xc00055ad10) (0xc00044ce60) Create stream\nI0507 01:13:01.131295 2719 log.go:172] (0xc00055ad10) (0xc00044ce60) Stream added, broadcasting: 5\nI0507 01:13:01.132101 2719 log.go:172] (0xc00055ad10) Reply frame received for 5\nI0507 01:13:01.210328 2719 log.go:172] (0xc00055ad10) Data frame received for 5\nI0507 01:13:01.210378 2719 log.go:172] (0xc00044ce60) (5) Data frame handling\nI0507 01:13:01.210407 2719 log.go:172] (0xc00044ce60) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0507 01:13:01.211076 2719 log.go:172] (0xc00055ad10) Data frame received for 3\nI0507 01:13:01.211109 2719 log.go:172] (0xc00052e320) (3) Data frame handling\nI0507 01:13:01.211162 2719 log.go:172] (0xc00055ad10) Data frame received for 5\nI0507 01:13:01.211184 2719 log.go:172] (0xc00044ce60) (5) Data frame handling\nI0507 01:13:01.211201 2719 log.go:172] (0xc00044ce60) (5) Data frame sent\nI0507 01:13:01.211222 2719 log.go:172] (0xc00055ad10) Data frame received for 5\nI0507 01:13:01.211250 2719 log.go:172] (0xc00044ce60) (5) Data frame handling\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0507 01:13:01.227537 2719 log.go:172] (0xc00055ad10) Data frame received for 1\nI0507 01:13:01.227573 2719 log.go:172] (0xc000c10500) (1) Data frame handling\nI0507 01:13:01.227583 2719 log.go:172] (0xc000c10500) (1) Data frame sent\nI0507 
01:13:01.227595 2719 log.go:172] (0xc00055ad10) (0xc000c10500) Stream removed, broadcasting: 1\nI0507 01:13:01.227607 2719 log.go:172] (0xc00055ad10) Go away received\nI0507 01:13:01.228034 2719 log.go:172] (0xc00055ad10) (0xc000c10500) Stream removed, broadcasting: 1\nI0507 01:13:01.228105 2719 log.go:172] (0xc00055ad10) (0xc00052e320) Stream removed, broadcasting: 3\nI0507 01:13:01.228146 2719 log.go:172] (0xc00055ad10) (0xc00044ce60) Stream removed, broadcasting: 5\n" May 7 01:13:01.232: INFO: stdout: "" May 7 01:13:01.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3001 execpod-affinitygpq2m -- /bin/sh -x -c nc -zv -t -w 2 10.103.2.54 80' May 7 01:13:01.470: INFO: stderr: "I0507 01:13:01.393661 2742 log.go:172] (0xc000a7b290) (0xc000845360) Create stream\nI0507 01:13:01.393713 2742 log.go:172] (0xc000a7b290) (0xc000845360) Stream added, broadcasting: 1\nI0507 01:13:01.398416 2742 log.go:172] (0xc000a7b290) Reply frame received for 1\nI0507 01:13:01.398453 2742 log.go:172] (0xc000a7b290) (0xc00079c3c0) Create stream\nI0507 01:13:01.398464 2742 log.go:172] (0xc000a7b290) (0xc00079c3c0) Stream added, broadcasting: 3\nI0507 01:13:01.399511 2742 log.go:172] (0xc000a7b290) Reply frame received for 3\nI0507 01:13:01.399557 2742 log.go:172] (0xc000a7b290) (0xc000570000) Create stream\nI0507 01:13:01.399569 2742 log.go:172] (0xc000a7b290) (0xc000570000) Stream added, broadcasting: 5\nI0507 01:13:01.400701 2742 log.go:172] (0xc000a7b290) Reply frame received for 5\nI0507 01:13:01.464635 2742 log.go:172] (0xc000a7b290) Data frame received for 5\nI0507 01:13:01.464666 2742 log.go:172] (0xc000570000) (5) Data frame handling\nI0507 01:13:01.464677 2742 log.go:172] (0xc000570000) (5) Data frame sent\nI0507 01:13:01.464685 2742 log.go:172] (0xc000a7b290) Data frame received for 5\nI0507 01:13:01.464694 2742 log.go:172] (0xc000570000) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.2.54 
80\nConnection to 10.103.2.54 80 port [tcp/http] succeeded!\nI0507 01:13:01.464718 2742 log.go:172] (0xc000a7b290) Data frame received for 3\nI0507 01:13:01.464740 2742 log.go:172] (0xc00079c3c0) (3) Data frame handling\nI0507 01:13:01.466134 2742 log.go:172] (0xc000a7b290) Data frame received for 1\nI0507 01:13:01.466152 2742 log.go:172] (0xc000845360) (1) Data frame handling\nI0507 01:13:01.466168 2742 log.go:172] (0xc000845360) (1) Data frame sent\nI0507 01:13:01.466191 2742 log.go:172] (0xc000a7b290) (0xc000845360) Stream removed, broadcasting: 1\nI0507 01:13:01.466211 2742 log.go:172] (0xc000a7b290) Go away received\nI0507 01:13:01.466545 2742 log.go:172] (0xc000a7b290) (0xc000845360) Stream removed, broadcasting: 1\nI0507 01:13:01.466572 2742 log.go:172] (0xc000a7b290) (0xc00079c3c0) Stream removed, broadcasting: 3\nI0507 01:13:01.466585 2742 log.go:172] (0xc000a7b290) (0xc000570000) Stream removed, broadcasting: 5\n" May 7 01:13:01.470: INFO: stdout: "" May 7 01:13:01.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3001 execpod-affinitygpq2m -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.103.2.54:80/ ; done' May 7 01:13:01.770: INFO: stderr: "I0507 01:13:01.596612 2762 log.go:172] (0xc00096b6b0) (0xc000b08500) Create stream\nI0507 01:13:01.596671 2762 log.go:172] (0xc00096b6b0) (0xc000b08500) Stream added, broadcasting: 1\nI0507 01:13:01.608838 2762 log.go:172] (0xc00096b6b0) Reply frame received for 1\nI0507 01:13:01.608887 2762 log.go:172] (0xc00096b6b0) (0xc00084cdc0) Create stream\nI0507 01:13:01.608912 2762 log.go:172] (0xc00096b6b0) (0xc00084cdc0) Stream added, broadcasting: 3\nI0507 01:13:01.609737 2762 log.go:172] (0xc00096b6b0) Reply frame received for 3\nI0507 01:13:01.609776 2762 log.go:172] (0xc00096b6b0) (0xc00051cc80) Create stream\nI0507 01:13:01.609787 2762 log.go:172] (0xc00096b6b0) (0xc00051cc80) Stream added, 
broadcasting: 5\nI0507 01:13:01.610433 2762 log.go:172] (0xc00096b6b0) Reply frame received for 5\n[... repeated log.go:172 SPDY data-frame handling debug omitted; the exec session's shell trace was:]\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.2.54:80/\n[... the echo/curl pair repeats for each of the 16 iterations ...]\nI0507 01:13:01.765692 2762 log.go:172] (0xc00096b6b0) (0xc000b08500) Stream removed, broadcasting: 1\nI0507 01:13:01.765709 2762 log.go:172] (0xc00096b6b0) Go away received\nI0507 01:13:01.766055 2762 log.go:172] (0xc00096b6b0) (0xc00084cdc0) Stream removed, broadcasting: 3\nI0507 01:13:01.766066 2762 log.go:172] (0xc00096b6b0) (0xc00051cc80) Stream removed, broadcasting: 5\n" May 7 01:13:01.771: INFO: stdout: 
"\naffinity-clusterip-lg5kt\naffinity-clusterip-lg5kt\naffinity-clusterip-lg5kt\naffinity-clusterip-lg5kt\naffinity-clusterip-lg5kt\naffinity-clusterip-lg5kt\naffinity-clusterip-lg5kt\naffinity-clusterip-lg5kt\naffinity-clusterip-lg5kt\naffinity-clusterip-lg5kt\naffinity-clusterip-lg5kt\naffinity-clusterip-lg5kt\naffinity-clusterip-lg5kt\naffinity-clusterip-lg5kt\naffinity-clusterip-lg5kt\naffinity-clusterip-lg5kt" May 7 01:13:01.772: INFO: Received response from host: May 7 01:13:01.772: INFO: Received response from host: affinity-clusterip-lg5kt May 7 01:13:01.772: INFO: Received response from host: affinity-clusterip-lg5kt May 7 01:13:01.772: INFO: Received response from host: affinity-clusterip-lg5kt May 7 01:13:01.772: INFO: Received response from host: affinity-clusterip-lg5kt May 7 01:13:01.772: INFO: Received response from host: affinity-clusterip-lg5kt May 7 01:13:01.772: INFO: Received response from host: affinity-clusterip-lg5kt May 7 01:13:01.772: INFO: Received response from host: affinity-clusterip-lg5kt May 7 01:13:01.772: INFO: Received response from host: affinity-clusterip-lg5kt May 7 01:13:01.772: INFO: Received response from host: affinity-clusterip-lg5kt May 7 01:13:01.772: INFO: Received response from host: affinity-clusterip-lg5kt May 7 01:13:01.772: INFO: Received response from host: affinity-clusterip-lg5kt May 7 01:13:01.772: INFO: Received response from host: affinity-clusterip-lg5kt May 7 01:13:01.772: INFO: Received response from host: affinity-clusterip-lg5kt May 7 01:13:01.772: INFO: Received response from host: affinity-clusterip-lg5kt May 7 01:13:01.772: INFO: Received response from host: affinity-clusterip-lg5kt May 7 01:13:01.772: INFO: Received response from host: affinity-clusterip-lg5kt May 7 01:13:01.772: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-3001, will wait for the garbage collector to delete the pods May 7 01:13:01.958: INFO: Deleting 
ReplicationController affinity-clusterip took: 41.198039ms May 7 01:13:02.359: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.253163ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:13:15.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3001" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:27.523 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":229,"skipped":3833,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:13:15.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 7 01:13:19.613: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:13:19.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9958" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":230,"skipped":3834,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:13:19.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 7 01:13:27.850: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 7 01:13:27.878: INFO: Pod pod-with-poststart-http-hook still exists May 7 01:13:29.878: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 7 01:13:29.882: INFO: Pod pod-with-poststart-http-hook still exists May 7 01:13:31.878: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 7 01:13:31.882: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:13:31.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6661" for this suite. 
• [SLOW TEST:12.201 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":231,"skipped":3871,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:13:31.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-1e510505-b112-43be-b484-7fd92a8da4a5 STEP: Creating a pod to test consume secrets May 7 01:13:31.966: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5d898b6f-de60-46b5-945d-9bed9abe3ab1" in namespace "projected-2762" to be "Succeeded or Failed" May 7 01:13:32.011: INFO: Pod "pod-projected-secrets-5d898b6f-de60-46b5-945d-9bed9abe3ab1": Phase="Pending", 
Reason="", readiness=false. Elapsed: 44.933113ms May 7 01:13:34.015: INFO: Pod "pod-projected-secrets-5d898b6f-de60-46b5-945d-9bed9abe3ab1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048786804s May 7 01:13:36.019: INFO: Pod "pod-projected-secrets-5d898b6f-de60-46b5-945d-9bed9abe3ab1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053254819s STEP: Saw pod success May 7 01:13:36.019: INFO: Pod "pod-projected-secrets-5d898b6f-de60-46b5-945d-9bed9abe3ab1" satisfied condition "Succeeded or Failed" May 7 01:13:36.023: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-5d898b6f-de60-46b5-945d-9bed9abe3ab1 container projected-secret-volume-test: STEP: delete the pod May 7 01:13:36.046: INFO: Waiting for pod pod-projected-secrets-5d898b6f-de60-46b5-945d-9bed9abe3ab1 to disappear May 7 01:13:36.048: INFO: Pod pod-projected-secrets-5d898b6f-de60-46b5-945d-9bed9abe3ab1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:13:36.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2762" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":232,"skipped":3871,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:13:36.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 7 01:13:36.158: INFO: Waiting up to 5m0s for pod "pod-e424f6f0-5af8-4b3e-a851-807c769efbec" in namespace "emptydir-9103" to be "Succeeded or Failed" May 7 01:13:36.162: INFO: Pod "pod-e424f6f0-5af8-4b3e-a851-807c769efbec": Phase="Pending", Reason="", readiness=false. Elapsed: 3.55114ms May 7 01:13:38.165: INFO: Pod "pod-e424f6f0-5af8-4b3e-a851-807c769efbec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007115452s May 7 01:13:40.170: INFO: Pod "pod-e424f6f0-5af8-4b3e-a851-807c769efbec": Phase="Running", Reason="", readiness=true. Elapsed: 4.011398881s May 7 01:13:42.175: INFO: Pod "pod-e424f6f0-5af8-4b3e-a851-807c769efbec": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016307137s STEP: Saw pod success May 7 01:13:42.175: INFO: Pod "pod-e424f6f0-5af8-4b3e-a851-807c769efbec" satisfied condition "Succeeded or Failed" May 7 01:13:42.178: INFO: Trying to get logs from node latest-worker2 pod pod-e424f6f0-5af8-4b3e-a851-807c769efbec container test-container: STEP: delete the pod May 7 01:13:42.220: INFO: Waiting for pod pod-e424f6f0-5af8-4b3e-a851-807c769efbec to disappear May 7 01:13:42.241: INFO: Pod pod-e424f6f0-5af8-4b3e-a851-807c769efbec no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:13:42.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9103" for this suite. • [SLOW TEST:6.149 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":233,"skipped":3880,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:13:42.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in 
namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 7 01:13:46.411: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-4013 PodName:var-expansion-49552a00-a084-4e2b-b4ed-b468a19b3a56 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:13:46.411: INFO: >>> kubeConfig: /root/.kube/config [... repeated log.go:172 SPDY stream setup/teardown debug omitted ...] STEP: test for file in mounted path May 7 01:13:46.524: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-4013 PodName:var-expansion-49552a00-a084-4e2b-b4ed-b468a19b3a56 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:13:46.525: INFO: >>> kubeConfig: /root/.kube/config [... repeated log.go:172 SPDY stream setup/teardown debug omitted ...] STEP: updating the annotation value May 7 01:13:47.136: INFO: Successfully updated pod "var-expansion-49552a00-a084-4e2b-b4ed-b468a19b3a56" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 7 01:13:47.158: INFO: Deleting pod "var-expansion-49552a00-a084-4e2b-b4ed-b468a19b3a56" in namespace "var-expansion-4013" May 7 01:13:47.162: INFO: Wait up to 5m0s for pod "var-expansion-49552a00-a084-4e2b-b4ed-b468a19b3a56" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:14:23.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4013" for this suite. 
• [SLOW TEST:40.937 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":234,"skipped":3927,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:14:23.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: 
Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:14:55.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4928" for this suite. • [SLOW TEST:32.644 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":235,"skipped":3948,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:14:55.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 7 01:14:56.377: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 7 01:14:58.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410896, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410896, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410896, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724410896, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 
01:15:01.445: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 01:15:01.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:15:02.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3042" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.991 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":236,"skipped":3971,"failed":0} SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:15:02.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-3965677b-6ee7-494d-ab1a-9734208aa77f in namespace container-probe-5647 May 7 01:15:06.940: INFO: Started pod busybox-3965677b-6ee7-494d-ab1a-9734208aa77f in namespace container-probe-5647 STEP: checking the pod's current state and verifying that restartCount is present May 7 01:15:06.943: INFO: Initial restart count of pod busybox-3965677b-6ee7-494d-ab1a-9734208aa77f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:19:07.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5647" for this suite. 
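The probe test above creates a busybox pod whose exec liveness probe keeps succeeding, then watches for four minutes to confirm `restartCount` stays 0. As a rough sketch of the shape of such a pod (a hypothetical manifest, not the e2e framework's actual helper code; field values here are illustrative assumptions):

```python
# Hypothetical sketch of a pod whose exec liveness probe always succeeds.
# The container touches /tmp/health on startup, so the probe command
# `cat /tmp/health` never fails and the kubelet never restarts it.
liveness_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-liveness-sketch"},
    "spec": {
        "containers": [{
            "name": "busybox",
            "image": "busybox",
            # create the probed file up front, then idle
            "args": ["/bin/sh", "-c", "touch /tmp/health; sleep 600"],
            "livenessProbe": {
                "exec": {"command": ["cat", "/tmp/health"]},
                "initialDelaySeconds": 15,  # illustrative value
                "failureThreshold": 1,
            },
        }],
    },
}

probe = liveness_pod["spec"]["containers"][0]["livenessProbe"]
assert probe["exec"]["command"] == ["cat", "/tmp/health"]
```

The companion conformance case (should *be* restarted) differs only in that the container deletes `/tmp/health` partway through, flipping the probe to failing.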
• [SLOW TEST:244.869 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":237,"skipped":3973,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:19:07.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 7 01:19:07.745: INFO: Waiting up to 5m0s for pod "downward-api-15dbb046-aa8d-42d9-a549-c14c592c358b" in namespace "downward-api-1349" to be "Succeeded or Failed" May 7 01:19:07.749: INFO: Pod "downward-api-15dbb046-aa8d-42d9-a549-c14c592c358b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.79608ms May 7 01:19:09.753: INFO: Pod "downward-api-15dbb046-aa8d-42d9-a549-c14c592c358b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008126751s May 7 01:19:11.758: INFO: Pod "downward-api-15dbb046-aa8d-42d9-a549-c14c592c358b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012766058s STEP: Saw pod success May 7 01:19:11.758: INFO: Pod "downward-api-15dbb046-aa8d-42d9-a549-c14c592c358b" satisfied condition "Succeeded or Failed" May 7 01:19:11.761: INFO: Trying to get logs from node latest-worker pod downward-api-15dbb046-aa8d-42d9-a549-c14c592c358b container dapi-container: STEP: delete the pod May 7 01:19:11.854: INFO: Waiting for pod downward-api-15dbb046-aa8d-42d9-a549-c14c592c358b to disappear May 7 01:19:11.863: INFO: Pod downward-api-15dbb046-aa8d-42d9-a549-c14c592c358b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:19:11.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1349" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":238,"skipped":3982,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:19:11.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 7 01:19:11.998: INFO: Waiting up to 5m0s for pod "downwardapi-volume-868f6732-96c7-4475-b132-728f68699e76" in namespace "projected-2546" to be "Succeeded or Failed" May 7 01:19:12.019: INFO: Pod "downwardapi-volume-868f6732-96c7-4475-b132-728f68699e76": Phase="Pending", Reason="", readiness=false. Elapsed: 21.006638ms May 7 01:19:14.024: INFO: Pod "downwardapi-volume-868f6732-96c7-4475-b132-728f68699e76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025917367s May 7 01:19:16.028: INFO: Pod "downwardapi-volume-868f6732-96c7-4475-b132-728f68699e76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030040283s STEP: Saw pod success May 7 01:19:16.028: INFO: Pod "downwardapi-volume-868f6732-96c7-4475-b132-728f68699e76" satisfied condition "Succeeded or Failed" May 7 01:19:16.031: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-868f6732-96c7-4475-b132-728f68699e76 container client-container: STEP: delete the pod May 7 01:19:16.079: INFO: Waiting for pod downwardapi-volume-868f6732-96c7-4475-b132-728f68699e76 to disappear May 7 01:19:16.090: INFO: Pod downwardapi-volume-868f6732-96c7-4475-b132-728f68699e76 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:19:16.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2546" for this suite. 
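The projected downwardAPI test above mounts the container's own memory limit into a file inside the pod. A minimal sketch of the volume source involved, assuming the standard `resourceFieldRef` mechanism (the container name `client-container` is taken from the log; the volume and path names are illustrative):

```python
# Sketch of a projected downward API volume exposing limits.memory as a file.
# Not the e2e framework's code -- just the shape of the API object it exercises.
projected_volume = {
    "name": "podinfo",  # illustrative volume name
    "projected": {
        "sources": [{
            "downwardAPI": {
                "items": [{
                    "path": "memory_limit",  # file created under the mount path
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.memory",
                    },
                }],
            },
        }],
    },
}

item = projected_volume["projected"]["sources"][0]["downwardAPI"]["items"][0]
assert item["resourceFieldRef"]["resource"] == "limits.memory"
```

The test then reads the mounted file from the container's logs and checks it matches the limit actually set on the container (falling back to node allocatable when no limit is declared, which is what the earlier `[sig-node] Downward API` case covers).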
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":239,"skipped":3982,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:19:16.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-91d96d7f-1afa-4b4d-b47b-09d3cf610ee6 STEP: Creating a pod to test consume secrets May 7 01:19:16.212: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-489a814c-f9c1-4031-b2e7-a239b0c441fd" in namespace "projected-5294" to be "Succeeded or Failed" May 7 01:19:16.244: INFO: Pod "pod-projected-secrets-489a814c-f9c1-4031-b2e7-a239b0c441fd": Phase="Pending", Reason="", readiness=false. Elapsed: 32.321895ms May 7 01:19:18.250: INFO: Pod "pod-projected-secrets-489a814c-f9c1-4031-b2e7-a239b0c441fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037695533s May 7 01:19:20.254: INFO: Pod "pod-projected-secrets-489a814c-f9c1-4031-b2e7-a239b0c441fd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042001104s STEP: Saw pod success May 7 01:19:20.254: INFO: Pod "pod-projected-secrets-489a814c-f9c1-4031-b2e7-a239b0c441fd" satisfied condition "Succeeded or Failed" May 7 01:19:20.257: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-489a814c-f9c1-4031-b2e7-a239b0c441fd container projected-secret-volume-test: STEP: delete the pod May 7 01:19:20.279: INFO: Waiting for pod pod-projected-secrets-489a814c-f9c1-4031-b2e7-a239b0c441fd to disappear May 7 01:19:20.319: INFO: Pod pod-projected-secrets-489a814c-f9c1-4031-b2e7-a239b0c441fd no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:19:20.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5294" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":240,"skipped":3983,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:19:20.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
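The DNS test starting here sets `dnsPolicy: None`, which tells the kubelet to ignore cluster DNS entirely and build the pod's `resolv.conf` only from the supplied `dnsConfig`. The values in this sketch (`1.1.1.1`, `resolv.conf.local`) are the ones visible in the pod dump that the test logs:

```python
# Sketch of the pod spec fragment this test exercises: with dnsPolicy "None",
# the pod's resolv.conf is generated purely from dnsConfig, so the test can
# assert exactly which nameserver and search domain the container sees.
dns_pod_spec = {
    "dnsPolicy": "None",
    "dnsConfig": {
        "nameservers": ["1.1.1.1"],
        "searches": ["resolv.conf.local"],
    },
}

assert dns_pod_spec["dnsPolicy"] == "None"
```

The test then execs `agnhost dns-suffix` and `agnhost dns-server-list` inside the pod (visible in the stream logs that follow) to verify both values actually landed in the container's resolver configuration.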
May 7 01:19:20.422: INFO: Created pod &Pod{ObjectMeta:{dns-1802 dns-1802 /api/v1/namespaces/dns-1802/pods/dns-1802 4faeba27-cae6-4a3f-b338-c125428c6678 2183836 0 2020-05-07 01:19:20 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-07 01:19:20 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gwsw7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gwsw7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gwsw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:n
il,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 7 01:19:20.426: INFO: The status of Pod dns-1802 is Pending, waiting for it to be Running (with Ready = true) May 7 01:19:22.430: INFO: The status of Pod dns-1802 is Pending, waiting for it to be Running (with Ready = true) May 7 01:19:24.430: INFO: The status of Pod dns-1802 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
May 7 01:19:24.431: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1802 PodName:dns-1802 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:19:24.431: INFO: >>> kubeConfig: /root/.kube/config I0507 01:19:24.474432 7 log.go:172] (0xc000840370) (0xc001d34c80) Create stream I0507 01:19:24.474468 7 log.go:172] (0xc000840370) (0xc001d34c80) Stream added, broadcasting: 1 I0507 01:19:24.476260 7 log.go:172] (0xc000840370) Reply frame received for 1 I0507 01:19:24.476296 7 log.go:172] (0xc000840370) (0xc0019a50e0) Create stream I0507 01:19:24.476303 7 log.go:172] (0xc000840370) (0xc0019a50e0) Stream added, broadcasting: 3 I0507 01:19:24.477454 7 log.go:172] (0xc000840370) Reply frame received for 3 I0507 01:19:24.477486 7 log.go:172] (0xc000840370) (0xc0012d50e0) Create stream I0507 01:19:24.477497 7 log.go:172] (0xc000840370) (0xc0012d50e0) Stream added, broadcasting: 5 I0507 01:19:24.478569 7 log.go:172] (0xc000840370) Reply frame received for 5 I0507 01:19:24.564365 7 log.go:172] (0xc000840370) Data frame received for 3 I0507 01:19:24.564399 7 log.go:172] (0xc0019a50e0) (3) Data frame handling I0507 01:19:24.564422 7 log.go:172] (0xc0019a50e0) (3) Data frame sent I0507 01:19:24.565959 7 log.go:172] (0xc000840370) Data frame received for 5 I0507 01:19:24.565984 7 log.go:172] (0xc0012d50e0) (5) Data frame handling I0507 01:19:24.567204 7 log.go:172] (0xc000840370) Data frame received for 3 I0507 01:19:24.567224 7 log.go:172] (0xc0019a50e0) (3) Data frame handling I0507 01:19:24.568166 7 log.go:172] (0xc000840370) Data frame received for 1 I0507 01:19:24.568200 7 log.go:172] (0xc001d34c80) (1) Data frame handling I0507 01:19:24.568228 7 log.go:172] (0xc001d34c80) (1) Data frame sent I0507 01:19:24.568259 7 log.go:172] (0xc000840370) (0xc001d34c80) Stream removed, broadcasting: 1 I0507 01:19:24.568299 7 log.go:172] (0xc000840370) Go away received I0507 01:19:24.568403 7 log.go:172] (0xc000840370) 
(0xc001d34c80) Stream removed, broadcasting: 1 I0507 01:19:24.568434 7 log.go:172] (0xc000840370) (0xc0019a50e0) Stream removed, broadcasting: 3 I0507 01:19:24.568447 7 log.go:172] (0xc000840370) (0xc0012d50e0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 7 01:19:24.568: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1802 PodName:dns-1802 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:19:24.568: INFO: >>> kubeConfig: /root/.kube/config I0507 01:19:24.610648 7 log.go:172] (0xc001b26210) (0xc000d37220) Create stream I0507 01:19:24.610676 7 log.go:172] (0xc001b26210) (0xc000d37220) Stream added, broadcasting: 1 I0507 01:19:24.612367 7 log.go:172] (0xc001b26210) Reply frame received for 1 I0507 01:19:24.612410 7 log.go:172] (0xc001b26210) (0xc000d372c0) Create stream I0507 01:19:24.612421 7 log.go:172] (0xc001b26210) (0xc000d372c0) Stream added, broadcasting: 3 I0507 01:19:24.613564 7 log.go:172] (0xc001b26210) Reply frame received for 3 I0507 01:19:24.613591 7 log.go:172] (0xc001b26210) (0xc0012d54a0) Create stream I0507 01:19:24.613600 7 log.go:172] (0xc001b26210) (0xc0012d54a0) Stream added, broadcasting: 5 I0507 01:19:24.614473 7 log.go:172] (0xc001b26210) Reply frame received for 5 I0507 01:19:24.692471 7 log.go:172] (0xc001b26210) Data frame received for 3 I0507 01:19:24.692501 7 log.go:172] (0xc000d372c0) (3) Data frame handling I0507 01:19:24.692521 7 log.go:172] (0xc000d372c0) (3) Data frame sent I0507 01:19:24.694074 7 log.go:172] (0xc001b26210) Data frame received for 3 I0507 01:19:24.694093 7 log.go:172] (0xc000d372c0) (3) Data frame handling I0507 01:19:24.694140 7 log.go:172] (0xc001b26210) Data frame received for 5 I0507 01:19:24.694175 7 log.go:172] (0xc0012d54a0) (5) Data frame handling I0507 01:19:24.695590 7 log.go:172] (0xc001b26210) Data frame received for 1 I0507 01:19:24.695628 7 log.go:172] (0xc000d37220) (1) Data 
frame handling I0507 01:19:24.695653 7 log.go:172] (0xc000d37220) (1) Data frame sent I0507 01:19:24.695687 7 log.go:172] (0xc001b26210) (0xc000d37220) Stream removed, broadcasting: 1 I0507 01:19:24.695781 7 log.go:172] (0xc001b26210) Go away received I0507 01:19:24.695824 7 log.go:172] (0xc001b26210) (0xc000d37220) Stream removed, broadcasting: 1 I0507 01:19:24.695848 7 log.go:172] (0xc001b26210) (0xc000d372c0) Stream removed, broadcasting: 3 I0507 01:19:24.695861 7 log.go:172] (0xc001b26210) (0xc0012d54a0) Stream removed, broadcasting: 5 May 7 01:19:24.695: INFO: Deleting pod dns-1802... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:19:24.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1802" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":241,"skipped":3998,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:19:24.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 7 01:19:24.848: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 7 01:19:24.866: INFO: Waiting for terminating namespaces to be 
deleted... May 7 01:19:24.872: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 7 01:19:24.876: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 7 01:19:24.876: INFO: Container kindnet-cni ready: true, restart count 0 May 7 01:19:24.876: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 7 01:19:24.876: INFO: Container kube-proxy ready: true, restart count 0 May 7 01:19:24.876: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 7 01:19:24.879: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 7 01:19:24.879: INFO: Container kindnet-cni ready: true, restart count 0 May 7 01:19:24.879: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 7 01:19:24.879: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
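The scheduler predicate this test exercises is that a hostPort collision only blocks scheduling when the port, the protocol, *and* the host IP all overlap: pod1 (127.0.0.1:54321/TCP), pod2 (127.0.0.2:54321/TCP), and pod3 (127.0.0.2:54321/UDP) can all land on the same node. A simplified model of that rule (not kube-scheduler's actual code; `0.0.0.0` is treated as overlapping every address):

```python
def host_ports_conflict(a, b):
    """True when two (hostIP, hostPort, protocol) triples cannot coexist on a node.

    Simplified sketch of the scheduler's hostPort predicate: same port is only
    a conflict when protocols match AND host IPs overlap (the wildcard
    0.0.0.0 overlaps everything).
    """
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)

# pod1 vs pod2: same port/protocol, different loopback IPs -> schedulable together
assert not host_ports_conflict(("127.0.0.1", 54321, "TCP"), ("127.0.0.2", 54321, "TCP"))
# pod2 vs pod3: same IP/port, TCP vs UDP -> schedulable together
assert not host_ports_conflict(("127.0.0.2", 54321, "TCP"), ("127.0.0.2", 54321, "UDP"))
# an identical triple, or a wildcard IP, would be a real conflict
assert host_ports_conflict(("0.0.0.0", 54321, "TCP"), ("127.0.0.1", 54321, "TCP"))
```

This is why the test expects all three pods to be scheduled onto the labeled node rather than any of them staying Pending.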
STEP: verifying the node has the label kubernetes.io/e2e-ee0c0e62-7c39-4ac1-9927-62f980ff20ba 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-ee0c0e62-7c39-4ac1-9927-62f980ff20ba off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-ee0c0e62-7c39-4ac1-9927-62f980ff20ba [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:19:41.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7063" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.752 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":242,"skipped":4003,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:19:41.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 7 01:19:42.096: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 7 01:19:44.277: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411182, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411182, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411182, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411182, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 01:19:46.296: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411182, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411182, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411182, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411182, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 01:19:49.527: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 01:19:49.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6589-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:19:51.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2702" for this suite. 
STEP: Destroying namespace "webhook-2702-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.774 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":243,"skipped":4013,"failed":0} SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:19:51.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 7 01:19:52.727: INFO: Pod name wrapped-volume-race-b0cb871a-f1a9-4618-9352-76fc5f16b0a7: Found 0 pods out of 5 May 7 01:19:57.736: INFO: Pod name wrapped-volume-race-b0cb871a-f1a9-4618-9352-76fc5f16b0a7: Found 5 pods out of 5 STEP: Ensuring each pod 
is running STEP: deleting ReplicationController wrapped-volume-race-b0cb871a-f1a9-4618-9352-76fc5f16b0a7 in namespace emptydir-wrapper-6817, will wait for the garbage collector to delete the pods May 7 01:20:10.211: INFO: Deleting ReplicationController wrapped-volume-race-b0cb871a-f1a9-4618-9352-76fc5f16b0a7 took: 21.905832ms May 7 01:20:10.612: INFO: Terminating ReplicationController wrapped-volume-race-b0cb871a-f1a9-4618-9352-76fc5f16b0a7 pods took: 400.298531ms STEP: Creating RC which spawns configmap-volume pods May 7 01:20:24.975: INFO: Pod name wrapped-volume-race-cf5eb338-7861-4774-a4a2-4b91569c6a35: Found 0 pods out of 5 May 7 01:20:29.981: INFO: Pod name wrapped-volume-race-cf5eb338-7861-4774-a4a2-4b91569c6a35: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-cf5eb338-7861-4774-a4a2-4b91569c6a35 in namespace emptydir-wrapper-6817, will wait for the garbage collector to delete the pods May 7 01:20:46.072: INFO: Deleting ReplicationController wrapped-volume-race-cf5eb338-7861-4774-a4a2-4b91569c6a35 took: 13.421407ms May 7 01:20:46.372: INFO: Terminating ReplicationController wrapped-volume-race-cf5eb338-7861-4774-a4a2-4b91569c6a35 pods took: 300.257225ms STEP: Creating RC which spawns configmap-volume pods May 7 01:20:55.244: INFO: Pod name wrapped-volume-race-2436bd4d-e023-4885-b02c-d314a314420d: Found 0 pods out of 5 May 7 01:21:00.930: INFO: Pod name wrapped-volume-race-2436bd4d-e023-4885-b02c-d314a314420d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2436bd4d-e023-4885-b02c-d314a314420d in namespace emptydir-wrapper-6817, will wait for the garbage collector to delete the pods May 7 01:21:17.383: INFO: Deleting ReplicationController wrapped-volume-race-2436bd4d-e023-4885-b02c-d314a314420d took: 6.667269ms May 7 01:21:17.683: INFO: Terminating ReplicationController wrapped-volume-race-2436bd4d-e023-4885-b02c-d314a314420d 
pods took: 300.273424ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:21:35.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6817" for this suite. • [SLOW TEST:104.492 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":244,"skipped":4015,"failed":0} SSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:21:35.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5328 May 7 01:21:39.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config exec --namespace=services-5328 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 7 01:21:43.175: INFO: stderr: "I0507 01:21:43.072514 2784 log.go:172] (0xc00003a840) (0xc00025ef00) Create stream\nI0507 01:21:43.072570 2784 log.go:172] (0xc00003a840) (0xc00025ef00) Stream added, broadcasting: 1\nI0507 01:21:43.075543 2784 log.go:172] (0xc00003a840) Reply frame received for 1\nI0507 01:21:43.075583 2784 log.go:172] (0xc00003a840) (0xc00025f180) Create stream\nI0507 01:21:43.075595 2784 log.go:172] (0xc00003a840) (0xc00025f180) Stream added, broadcasting: 3\nI0507 01:21:43.076680 2784 log.go:172] (0xc00003a840) Reply frame received for 3\nI0507 01:21:43.076711 2784 log.go:172] (0xc00003a840) (0xc00025fea0) Create stream\nI0507 01:21:43.076726 2784 log.go:172] (0xc00003a840) (0xc00025fea0) Stream added, broadcasting: 5\nI0507 01:21:43.078836 2784 log.go:172] (0xc00003a840) Reply frame received for 5\nI0507 01:21:43.160790 2784 log.go:172] (0xc00003a840) Data frame received for 5\nI0507 01:21:43.160817 2784 log.go:172] (0xc00025fea0) (5) Data frame handling\nI0507 01:21:43.160839 2784 log.go:172] (0xc00025fea0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0507 01:21:43.166828 2784 log.go:172] (0xc00003a840) Data frame received for 3\nI0507 01:21:43.166864 2784 log.go:172] (0xc00025f180) (3) Data frame handling\nI0507 01:21:43.166887 2784 log.go:172] (0xc00025f180) (3) Data frame sent\nI0507 01:21:43.167492 2784 log.go:172] (0xc00003a840) Data frame received for 3\nI0507 01:21:43.167514 2784 log.go:172] (0xc00025f180) (3) Data frame handling\nI0507 01:21:43.167546 2784 log.go:172] (0xc00003a840) Data frame received for 5\nI0507 01:21:43.167572 2784 log.go:172] (0xc00025fea0) (5) Data frame handling\nI0507 01:21:43.169058 2784 log.go:172] (0xc00003a840) Data frame received for 1\nI0507 01:21:43.169081 2784 log.go:172] (0xc00025ef00) (1) 
Data frame handling\nI0507 01:21:43.169096 2784 log.go:172] (0xc00025ef00) (1) Data frame sent\nI0507 01:21:43.169264 2784 log.go:172] (0xc00003a840) (0xc00025ef00) Stream removed, broadcasting: 1\nI0507 01:21:43.169435 2784 log.go:172] (0xc00003a840) Go away received\nI0507 01:21:43.169572 2784 log.go:172] (0xc00003a840) (0xc00025ef00) Stream removed, broadcasting: 1\nI0507 01:21:43.169588 2784 log.go:172] (0xc00003a840) (0xc00025f180) Stream removed, broadcasting: 3\nI0507 01:21:43.169597 2784 log.go:172] (0xc00003a840) (0xc00025fea0) Stream removed, broadcasting: 5\n" May 7 01:21:43.175: INFO: stdout: "iptables" May 7 01:21:43.175: INFO: proxyMode: iptables May 7 01:21:43.212: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 7 01:21:43.296: INFO: Pod kube-proxy-mode-detector still exists May 7 01:21:45.296: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 7 01:21:45.300: INFO: Pod kube-proxy-mode-detector still exists May 7 01:21:47.296: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 7 01:21:47.301: INFO: Pod kube-proxy-mode-detector still exists May 7 01:21:49.296: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 7 01:21:49.301: INFO: Pod kube-proxy-mode-detector still exists May 7 01:21:51.296: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 7 01:21:51.301: INFO: Pod kube-proxy-mode-detector still exists May 7 01:21:53.296: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 7 01:21:53.300: INFO: Pod kube-proxy-mode-detector still exists May 7 01:21:55.296: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 7 01:21:55.300: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-5328 STEP: creating replication controller affinity-nodeport-timeout in namespace services-5328 I0507 01:21:55.351828 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: 
services-5328, replica count: 3 I0507 01:21:58.402234 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 01:22:01.402503 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 7 01:22:01.414: INFO: Creating new exec pod May 7 01:22:06.443: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5328 execpod-affinityfkmqw -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 7 01:22:06.675: INFO: stderr: "I0507 01:22:06.578842 2821 log.go:172] (0xc00054a000) (0xc0005c6e60) Create stream\nI0507 01:22:06.578894 2821 log.go:172] (0xc00054a000) (0xc0005c6e60) Stream added, broadcasting: 1\nI0507 01:22:06.581637 2821 log.go:172] (0xc00054a000) Reply frame received for 1\nI0507 01:22:06.581680 2821 log.go:172] (0xc00054a000) (0xc00050ac80) Create stream\nI0507 01:22:06.581695 2821 log.go:172] (0xc00054a000) (0xc00050ac80) Stream added, broadcasting: 3\nI0507 01:22:06.582758 2821 log.go:172] (0xc00054a000) Reply frame received for 3\nI0507 01:22:06.582792 2821 log.go:172] (0xc00054a000) (0xc000137f40) Create stream\nI0507 01:22:06.582805 2821 log.go:172] (0xc00054a000) (0xc000137f40) Stream added, broadcasting: 5\nI0507 01:22:06.583863 2821 log.go:172] (0xc00054a000) Reply frame received for 5\nI0507 01:22:06.669636 2821 log.go:172] (0xc00054a000) Data frame received for 5\nI0507 01:22:06.669704 2821 log.go:172] (0xc000137f40) (5) Data frame handling\nI0507 01:22:06.669728 2821 log.go:172] (0xc000137f40) (5) Data frame sent\nI0507 01:22:06.669744 2821 log.go:172] (0xc00054a000) Data frame received for 5\nI0507 01:22:06.669757 2821 log.go:172] (0xc000137f40) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to 
affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0507 01:22:06.669805 2821 log.go:172] (0xc00054a000) Data frame received for 3\nI0507 01:22:06.669834 2821 log.go:172] (0xc00050ac80) (3) Data frame handling\nI0507 01:22:06.670918 2821 log.go:172] (0xc00054a000) Data frame received for 1\nI0507 01:22:06.670932 2821 log.go:172] (0xc0005c6e60) (1) Data frame handling\nI0507 01:22:06.670939 2821 log.go:172] (0xc0005c6e60) (1) Data frame sent\nI0507 01:22:06.670947 2821 log.go:172] (0xc00054a000) (0xc0005c6e60) Stream removed, broadcasting: 1\nI0507 01:22:06.670997 2821 log.go:172] (0xc00054a000) Go away received\nI0507 01:22:06.671172 2821 log.go:172] (0xc00054a000) (0xc0005c6e60) Stream removed, broadcasting: 1\nI0507 01:22:06.671183 2821 log.go:172] (0xc00054a000) (0xc00050ac80) Stream removed, broadcasting: 3\nI0507 01:22:06.671191 2821 log.go:172] (0xc00054a000) (0xc000137f40) Stream removed, broadcasting: 5\n" May 7 01:22:06.675: INFO: stdout: "" May 7 01:22:06.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5328 execpod-affinityfkmqw -- /bin/sh -x -c nc -zv -t -w 2 10.102.212.181 80' May 7 01:22:06.906: INFO: stderr: "I0507 01:22:06.829586 2842 log.go:172] (0xc0000ec370) (0xc0000fed20) Create stream\nI0507 01:22:06.829663 2842 log.go:172] (0xc0000ec370) (0xc0000fed20) Stream added, broadcasting: 1\nI0507 01:22:06.831838 2842 log.go:172] (0xc0000ec370) Reply frame received for 1\nI0507 01:22:06.831891 2842 log.go:172] (0xc0000ec370) (0xc00016d720) Create stream\nI0507 01:22:06.831908 2842 log.go:172] (0xc0000ec370) (0xc00016d720) Stream added, broadcasting: 3\nI0507 01:22:06.832881 2842 log.go:172] (0xc0000ec370) Reply frame received for 3\nI0507 01:22:06.832902 2842 log.go:172] (0xc0000ec370) (0xc00030e140) Create stream\nI0507 01:22:06.832913 2842 log.go:172] (0xc0000ec370) (0xc00030e140) Stream added, broadcasting: 5\nI0507 01:22:06.834261 2842 log.go:172] 
(0xc0000ec370) Reply frame received for 5\nI0507 01:22:06.898250 2842 log.go:172] (0xc0000ec370) Data frame received for 3\nI0507 01:22:06.898272 2842 log.go:172] (0xc00016d720) (3) Data frame handling\nI0507 01:22:06.898417 2842 log.go:172] (0xc0000ec370) Data frame received for 5\nI0507 01:22:06.898441 2842 log.go:172] (0xc00030e140) (5) Data frame handling\nI0507 01:22:06.898462 2842 log.go:172] (0xc00030e140) (5) Data frame sent\nI0507 01:22:06.898478 2842 log.go:172] (0xc0000ec370) Data frame received for 5\nI0507 01:22:06.898495 2842 log.go:172] (0xc00030e140) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.212.181 80\nConnection to 10.102.212.181 80 port [tcp/http] succeeded!\nI0507 01:22:06.900104 2842 log.go:172] (0xc0000ec370) Data frame received for 1\nI0507 01:22:06.900114 2842 log.go:172] (0xc0000fed20) (1) Data frame handling\nI0507 01:22:06.900121 2842 log.go:172] (0xc0000fed20) (1) Data frame sent\nI0507 01:22:06.900315 2842 log.go:172] (0xc0000ec370) (0xc0000fed20) Stream removed, broadcasting: 1\nI0507 01:22:06.900469 2842 log.go:172] (0xc0000ec370) Go away received\nI0507 01:22:06.900878 2842 log.go:172] (0xc0000ec370) (0xc0000fed20) Stream removed, broadcasting: 1\nI0507 01:22:06.900900 2842 log.go:172] (0xc0000ec370) (0xc00016d720) Stream removed, broadcasting: 3\nI0507 01:22:06.900912 2842 log.go:172] (0xc0000ec370) (0xc00030e140) Stream removed, broadcasting: 5\n" May 7 01:22:06.906: INFO: stdout: "" May 7 01:22:06.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5328 execpod-affinityfkmqw -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30480' May 7 01:22:07.084: INFO: stderr: "I0507 01:22:07.022154 2863 log.go:172] (0xc000b1a210) (0xc0009548c0) Create stream\nI0507 01:22:07.022215 2863 log.go:172] (0xc000b1a210) (0xc0009548c0) Stream added, broadcasting: 1\nI0507 01:22:07.025675 2863 log.go:172] (0xc000b1a210) Reply frame received for 1\nI0507 
01:22:07.025749 2863 log.go:172] (0xc000b1a210) (0xc0009465a0) Create stream\nI0507 01:22:07.025781 2863 log.go:172] (0xc000b1a210) (0xc0009465a0) Stream added, broadcasting: 3\nI0507 01:22:07.026916 2863 log.go:172] (0xc000b1a210) Reply frame received for 3\nI0507 01:22:07.026953 2863 log.go:172] (0xc000b1a210) (0xc000955cc0) Create stream\nI0507 01:22:07.026980 2863 log.go:172] (0xc000b1a210) (0xc000955cc0) Stream added, broadcasting: 5\nI0507 01:22:07.027951 2863 log.go:172] (0xc000b1a210) Reply frame received for 5\nI0507 01:22:07.077855 2863 log.go:172] (0xc000b1a210) Data frame received for 3\nI0507 01:22:07.077897 2863 log.go:172] (0xc0009465a0) (3) Data frame handling\nI0507 01:22:07.077933 2863 log.go:172] (0xc000b1a210) Data frame received for 5\nI0507 01:22:07.077959 2863 log.go:172] (0xc000955cc0) (5) Data frame handling\nI0507 01:22:07.077984 2863 log.go:172] (0xc000955cc0) (5) Data frame sent\nI0507 01:22:07.078008 2863 log.go:172] (0xc000b1a210) Data frame received for 5\nI0507 01:22:07.078033 2863 log.go:172] (0xc000955cc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30480\nConnection to 172.17.0.13 30480 port [tcp/30480] succeeded!\nI0507 01:22:07.079208 2863 log.go:172] (0xc000b1a210) Data frame received for 1\nI0507 01:22:07.079232 2863 log.go:172] (0xc0009548c0) (1) Data frame handling\nI0507 01:22:07.079252 2863 log.go:172] (0xc0009548c0) (1) Data frame sent\nI0507 01:22:07.079269 2863 log.go:172] (0xc000b1a210) (0xc0009548c0) Stream removed, broadcasting: 1\nI0507 01:22:07.079283 2863 log.go:172] (0xc000b1a210) Go away received\nI0507 01:22:07.079622 2863 log.go:172] (0xc000b1a210) (0xc0009548c0) Stream removed, broadcasting: 1\nI0507 01:22:07.079636 2863 log.go:172] (0xc000b1a210) (0xc0009465a0) Stream removed, broadcasting: 3\nI0507 01:22:07.079643 2863 log.go:172] (0xc000b1a210) (0xc000955cc0) Stream removed, broadcasting: 5\n" May 7 01:22:07.084: INFO: stdout: "" May 7 01:22:07.084: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5328 execpod-affinityfkmqw -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30480' May 7 01:22:07.293: INFO: stderr: "I0507 01:22:07.221782 2883 log.go:172] (0xc0009b0790) (0xc00084cf00) Create stream\nI0507 01:22:07.221912 2883 log.go:172] (0xc0009b0790) (0xc00084cf00) Stream added, broadcasting: 1\nI0507 01:22:07.230691 2883 log.go:172] (0xc0009b0790) Reply frame received for 1\nI0507 01:22:07.230743 2883 log.go:172] (0xc0009b0790) (0xc00081cfa0) Create stream\nI0507 01:22:07.230761 2883 log.go:172] (0xc0009b0790) (0xc00081cfa0) Stream added, broadcasting: 3\nI0507 01:22:07.231743 2883 log.go:172] (0xc0009b0790) Reply frame received for 3\nI0507 01:22:07.231783 2883 log.go:172] (0xc0009b0790) (0xc00084d4a0) Create stream\nI0507 01:22:07.231796 2883 log.go:172] (0xc0009b0790) (0xc00084d4a0) Stream added, broadcasting: 5\nI0507 01:22:07.232537 2883 log.go:172] (0xc0009b0790) Reply frame received for 5\nI0507 01:22:07.285341 2883 log.go:172] (0xc0009b0790) Data frame received for 5\nI0507 01:22:07.285386 2883 log.go:172] (0xc00084d4a0) (5) Data frame handling\nI0507 01:22:07.285417 2883 log.go:172] (0xc00084d4a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 30480\nI0507 01:22:07.285807 2883 log.go:172] (0xc0009b0790) Data frame received for 5\nI0507 01:22:07.285828 2883 log.go:172] (0xc00084d4a0) (5) Data frame handling\nI0507 01:22:07.285837 2883 log.go:172] (0xc00084d4a0) (5) Data frame sent\nConnection to 172.17.0.12 30480 port [tcp/30480] succeeded!\nI0507 01:22:07.286252 2883 log.go:172] (0xc0009b0790) Data frame received for 3\nI0507 01:22:07.286274 2883 log.go:172] (0xc00081cfa0) (3) Data frame handling\nI0507 01:22:07.286673 2883 log.go:172] (0xc0009b0790) Data frame received for 5\nI0507 01:22:07.286706 2883 log.go:172] (0xc00084d4a0) (5) Data frame handling\nI0507 01:22:07.288180 2883 log.go:172] (0xc0009b0790) Data frame received for 1\nI0507 01:22:07.288194 2883 
log.go:172] (0xc00084cf00) (1) Data frame handling\nI0507 01:22:07.288203 2883 log.go:172] (0xc00084cf00) (1) Data frame sent\nI0507 01:22:07.288212 2883 log.go:172] (0xc0009b0790) (0xc00084cf00) Stream removed, broadcasting: 1\nI0507 01:22:07.288259 2883 log.go:172] (0xc0009b0790) Go away received\nI0507 01:22:07.288503 2883 log.go:172] (0xc0009b0790) (0xc00084cf00) Stream removed, broadcasting: 1\nI0507 01:22:07.288515 2883 log.go:172] (0xc0009b0790) (0xc00081cfa0) Stream removed, broadcasting: 3\nI0507 01:22:07.288522 2883 log.go:172] (0xc0009b0790) (0xc00084d4a0) Stream removed, broadcasting: 5\n" May 7 01:22:07.293: INFO: stdout: "" May 7 01:22:07.293: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5328 execpod-affinityfkmqw -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30480/ ; done' May 7 01:22:07.589: INFO: stderr: "I0507 01:22:07.440512 2902 log.go:172] (0xc0008da000) (0xc000167680) Create stream\nI0507 01:22:07.440568 2902 log.go:172] (0xc0008da000) (0xc000167680) Stream added, broadcasting: 1\nI0507 01:22:07.442286 2902 log.go:172] (0xc0008da000) Reply frame received for 1\nI0507 01:22:07.442331 2902 log.go:172] (0xc0008da000) (0xc0006465a0) Create stream\nI0507 01:22:07.442346 2902 log.go:172] (0xc0008da000) (0xc0006465a0) Stream added, broadcasting: 3\nI0507 01:22:07.443106 2902 log.go:172] (0xc0008da000) Reply frame received for 3\nI0507 01:22:07.443141 2902 log.go:172] (0xc0008da000) (0xc000abe000) Create stream\nI0507 01:22:07.443156 2902 log.go:172] (0xc0008da000) (0xc000abe000) Stream added, broadcasting: 5\nI0507 01:22:07.443893 2902 log.go:172] (0xc0008da000) Reply frame received for 5\nI0507 01:22:07.499681 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.499707 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.499716 2902 log.go:172] (0xc000abe000) (5) Data frame 
sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.499746 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.499776 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.499797 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.503748 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.503782 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.503810 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.504364 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.504395 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.504416 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.504446 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.504460 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.504478 2902 log.go:172] (0xc000abe000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.508178 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.508196 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.508233 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.508557 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.508573 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.508610 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.508650 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.508675 2902 log.go:172] (0xc000abe000) (5) Data frame sent\nI0507 01:22:07.508698 2902 log.go:172] (0xc0008da000) Data frame received for 5\n+ echo\nI0507 01:22:07.508723 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.508768 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.508787 2902 
log.go:172] (0xc000abe000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.512214 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.512230 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.512256 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.512747 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.512760 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.512766 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.512783 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.512796 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.512810 2902 log.go:172] (0xc000abe000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.516384 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.516399 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.516405 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.516884 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.516903 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.516926 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.516946 2902 log.go:172] (0xc000abe000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.516963 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.516977 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.523483 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.523507 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.523525 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.524018 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.524041 2902 
log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.524054 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.524072 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.524081 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.524090 2902 log.go:172] (0xc000abe000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.527537 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.527559 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.527577 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.528005 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.528033 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.528058 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.528090 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.528112 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.528143 2902 log.go:172] (0xc000abe000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.536712 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.536735 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.536751 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.537001 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.537306 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.537335 2902 log.go:172] (0xc000abe000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.537361 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.537384 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.537409 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.540887 
2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.540899 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.540909 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.541452 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.541477 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.541496 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.541519 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.541530 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.541543 2902 log.go:172] (0xc000abe000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.546799 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.546830 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.546852 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.547297 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.547340 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.547381 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.547405 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.547424 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.547449 2902 log.go:172] (0xc000abe000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.550783 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.550827 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.550895 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.551116 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.551151 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.551172 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 
01:22:07.551197 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.551209 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.551261 2902 log.go:172] (0xc000abe000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.555666 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.555686 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.555704 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.555893 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.555914 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.555922 2902 log.go:172] (0xc000abe000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.555938 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.555949 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.555961 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.559741 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.559766 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.559800 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.560253 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.560275 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.560287 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.560306 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.560315 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.560325 2902 log.go:172] (0xc000abe000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.564044 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.564062 2902 log.go:172] (0xc0006465a0) (3) Data frame 
handling\nI0507 01:22:07.564079 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.564637 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.564656 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.564666 2902 log.go:172] (0xc000abe000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.564726 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.564748 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.564769 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.569631 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.569657 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.569688 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.574113 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.574134 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.574143 2902 log.go:172] (0xc000abe000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.574154 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.574161 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.574167 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.577617 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.577629 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.577636 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.577982 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.577996 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.578011 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.578047 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.578058 2902 log.go:172] 
(0xc000abe000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.578070 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.582339 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.582356 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.582367 2902 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0507 01:22:07.582988 2902 log.go:172] (0xc0008da000) Data frame received for 3\nI0507 01:22:07.583001 2902 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0507 01:22:07.583058 2902 log.go:172] (0xc0008da000) Data frame received for 5\nI0507 01:22:07.583070 2902 log.go:172] (0xc000abe000) (5) Data frame handling\nI0507 01:22:07.584895 2902 log.go:172] (0xc0008da000) Data frame received for 1\nI0507 01:22:07.584911 2902 log.go:172] (0xc000167680) (1) Data frame handling\nI0507 01:22:07.584919 2902 log.go:172] (0xc000167680) (1) Data frame sent\nI0507 01:22:07.584927 2902 log.go:172] (0xc0008da000) (0xc000167680) Stream removed, broadcasting: 1\nI0507 01:22:07.584970 2902 log.go:172] (0xc0008da000) Go away received\nI0507 01:22:07.585308 2902 log.go:172] (0xc0008da000) (0xc000167680) Stream removed, broadcasting: 1\nI0507 01:22:07.585322 2902 log.go:172] (0xc0008da000) (0xc0006465a0) Stream removed, broadcasting: 3\nI0507 01:22:07.585329 2902 log.go:172] (0xc0008da000) (0xc000abe000) Stream removed, broadcasting: 5\n" May 7 01:22:07.590: INFO: stdout: 
"\naffinity-nodeport-timeout-pxg47\naffinity-nodeport-timeout-pxg47\naffinity-nodeport-timeout-pxg47\naffinity-nodeport-timeout-pxg47\naffinity-nodeport-timeout-pxg47\naffinity-nodeport-timeout-pxg47\naffinity-nodeport-timeout-pxg47\naffinity-nodeport-timeout-pxg47\naffinity-nodeport-timeout-pxg47\naffinity-nodeport-timeout-pxg47\naffinity-nodeport-timeout-pxg47\naffinity-nodeport-timeout-pxg47\naffinity-nodeport-timeout-pxg47\naffinity-nodeport-timeout-pxg47\naffinity-nodeport-timeout-pxg47\naffinity-nodeport-timeout-pxg47" May 7 01:22:07.590: INFO: Received response from host: May 7 01:22:07.590: INFO: Received response from host: affinity-nodeport-timeout-pxg47 May 7 01:22:07.590: INFO: Received response from host: affinity-nodeport-timeout-pxg47 May 7 01:22:07.590: INFO: Received response from host: affinity-nodeport-timeout-pxg47 May 7 01:22:07.590: INFO: Received response from host: affinity-nodeport-timeout-pxg47 May 7 01:22:07.590: INFO: Received response from host: affinity-nodeport-timeout-pxg47 May 7 01:22:07.590: INFO: Received response from host: affinity-nodeport-timeout-pxg47 May 7 01:22:07.590: INFO: Received response from host: affinity-nodeport-timeout-pxg47 May 7 01:22:07.590: INFO: Received response from host: affinity-nodeport-timeout-pxg47 May 7 01:22:07.590: INFO: Received response from host: affinity-nodeport-timeout-pxg47 May 7 01:22:07.590: INFO: Received response from host: affinity-nodeport-timeout-pxg47 May 7 01:22:07.590: INFO: Received response from host: affinity-nodeport-timeout-pxg47 May 7 01:22:07.590: INFO: Received response from host: affinity-nodeport-timeout-pxg47 May 7 01:22:07.590: INFO: Received response from host: affinity-nodeport-timeout-pxg47 May 7 01:22:07.590: INFO: Received response from host: affinity-nodeport-timeout-pxg47 May 7 01:22:07.590: INFO: Received response from host: affinity-nodeport-timeout-pxg47 May 7 01:22:07.590: INFO: Received response from host: affinity-nodeport-timeout-pxg47 May 7 01:22:07.590: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5328 execpod-affinityfkmqw -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30480/' May 7 01:22:07.794: INFO: stderr: "I0507 01:22:07.713871 2921 log.go:172] (0xc00024cfd0) (0xc00039e640) Create stream\nI0507 01:22:07.713926 2921 log.go:172] (0xc00024cfd0) (0xc00039e640) Stream added, broadcasting: 1\nI0507 01:22:07.718659 2921 log.go:172] (0xc00024cfd0) Reply frame received for 1\nI0507 01:22:07.718707 2921 log.go:172] (0xc00024cfd0) (0xc00039edc0) Create stream\nI0507 01:22:07.718740 2921 log.go:172] (0xc00024cfd0) (0xc00039edc0) Stream added, broadcasting: 3\nI0507 01:22:07.719732 2921 log.go:172] (0xc00024cfd0) Reply frame received for 3\nI0507 01:22:07.719790 2921 log.go:172] (0xc00024cfd0) (0xc00039f360) Create stream\nI0507 01:22:07.719816 2921 log.go:172] (0xc00024cfd0) (0xc00039f360) Stream added, broadcasting: 5\nI0507 01:22:07.720919 2921 log.go:172] (0xc00024cfd0) Reply frame received for 5\nI0507 01:22:07.780350 2921 log.go:172] (0xc00024cfd0) Data frame received for 5\nI0507 01:22:07.780380 2921 log.go:172] (0xc00039f360) (5) Data frame handling\nI0507 01:22:07.780403 2921 log.go:172] (0xc00039f360) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30480/\nI0507 01:22:07.785950 2921 log.go:172] (0xc00024cfd0) Data frame received for 3\nI0507 01:22:07.785983 2921 log.go:172] (0xc00039edc0) (3) Data frame handling\nI0507 01:22:07.786008 2921 log.go:172] (0xc00039edc0) (3) Data frame sent\nI0507 01:22:07.786769 2921 log.go:172] (0xc00024cfd0) Data frame received for 5\nI0507 01:22:07.786810 2921 log.go:172] (0xc00039f360) (5) Data frame handling\nI0507 01:22:07.786853 2921 log.go:172] (0xc00024cfd0) Data frame received for 3\nI0507 01:22:07.786893 2921 log.go:172] (0xc00039edc0) (3) Data frame handling\nI0507 01:22:07.788413 2921 log.go:172] (0xc00024cfd0) Data frame received for 
1\nI0507 01:22:07.788445 2921 log.go:172] (0xc00039e640) (1) Data frame handling\nI0507 01:22:07.788460 2921 log.go:172] (0xc00039e640) (1) Data frame sent\nI0507 01:22:07.788475 2921 log.go:172] (0xc00024cfd0) (0xc00039e640) Stream removed, broadcasting: 1\nI0507 01:22:07.788955 2921 log.go:172] (0xc00024cfd0) (0xc00039e640) Stream removed, broadcasting: 1\nI0507 01:22:07.788979 2921 log.go:172] (0xc00024cfd0) (0xc00039edc0) Stream removed, broadcasting: 3\nI0507 01:22:07.789530 2921 log.go:172] (0xc00024cfd0) (0xc00039f360) Stream removed, broadcasting: 5\n" May 7 01:22:07.794: INFO: stdout: "affinity-nodeport-timeout-pxg47" May 7 01:22:22.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5328 execpod-affinityfkmqw -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30480/' May 7 01:22:23.022: INFO: stderr: "I0507 01:22:22.928988 2941 log.go:172] (0xc000cc00b0) (0xc00052a320) Create stream\nI0507 01:22:22.929092 2941 log.go:172] (0xc000cc00b0) (0xc00052a320) Stream added, broadcasting: 1\nI0507 01:22:22.938836 2941 log.go:172] (0xc000cc00b0) Reply frame received for 1\nI0507 01:22:22.938881 2941 log.go:172] (0xc000cc00b0) (0xc00052b2c0) Create stream\nI0507 01:22:22.938892 2941 log.go:172] (0xc000cc00b0) (0xc00052b2c0) Stream added, broadcasting: 3\nI0507 01:22:22.940350 2941 log.go:172] (0xc000cc00b0) Reply frame received for 3\nI0507 01:22:22.940390 2941 log.go:172] (0xc000cc00b0) (0xc0004f0e60) Create stream\nI0507 01:22:22.940398 2941 log.go:172] (0xc000cc00b0) (0xc0004f0e60) Stream added, broadcasting: 5\nI0507 01:22:22.941340 2941 log.go:172] (0xc000cc00b0) Reply frame received for 5\nI0507 01:22:23.010445 2941 log.go:172] (0xc000cc00b0) Data frame received for 5\nI0507 01:22:23.010466 2941 log.go:172] (0xc0004f0e60) (5) Data frame handling\nI0507 01:22:23.010479 2941 log.go:172] (0xc0004f0e60) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 
http://172.17.0.13:30480/\nI0507 01:22:23.014751 2941 log.go:172] (0xc000cc00b0) Data frame received for 3\nI0507 01:22:23.014778 2941 log.go:172] (0xc00052b2c0) (3) Data frame handling\nI0507 01:22:23.014801 2941 log.go:172] (0xc00052b2c0) (3) Data frame sent\nI0507 01:22:23.015556 2941 log.go:172] (0xc000cc00b0) Data frame received for 3\nI0507 01:22:23.015575 2941 log.go:172] (0xc00052b2c0) (3) Data frame handling\nI0507 01:22:23.015819 2941 log.go:172] (0xc000cc00b0) Data frame received for 5\nI0507 01:22:23.015843 2941 log.go:172] (0xc0004f0e60) (5) Data frame handling\nI0507 01:22:23.017104 2941 log.go:172] (0xc000cc00b0) Data frame received for 1\nI0507 01:22:23.017255 2941 log.go:172] (0xc00052a320) (1) Data frame handling\nI0507 01:22:23.017275 2941 log.go:172] (0xc00052a320) (1) Data frame sent\nI0507 01:22:23.017531 2941 log.go:172] (0xc000cc00b0) (0xc00052a320) Stream removed, broadcasting: 1\nI0507 01:22:23.017554 2941 log.go:172] (0xc000cc00b0) Go away received\nI0507 01:22:23.017826 2941 log.go:172] (0xc000cc00b0) (0xc00052a320) Stream removed, broadcasting: 1\nI0507 01:22:23.017837 2941 log.go:172] (0xc000cc00b0) (0xc00052b2c0) Stream removed, broadcasting: 3\nI0507 01:22:23.017843 2941 log.go:172] (0xc000cc00b0) (0xc0004f0e60) Stream removed, broadcasting: 5\n" May 7 01:22:23.022: INFO: stdout: "affinity-nodeport-timeout-9kshn" May 7 01:22:23.022: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-5328, will wait for the garbage collector to delete the pods May 7 01:22:23.283: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 49.921881ms May 7 01:22:25.083: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 1.800247209s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:22:35.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "services-5328" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:59.650 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":245,"skipped":4021,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:22:35.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9088 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-9088 May 7 01:22:35.608: INFO: Found 0 stateful pods, waiting for 1 May 7 01:22:45.613: INFO: Waiting 
for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 7 01:22:45.632: INFO: Deleting all statefulset in ns statefulset-9088 May 7 01:22:45.634: INFO: Scaling statefulset ss to 0 May 7 01:22:55.789: INFO: Waiting for statefulset status.replicas updated to 0 May 7 01:22:55.792: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:22:55.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9088" for this suite. • [SLOW TEST:20.364 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":246,"skipped":4021,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:22:55.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 01:22:55.869: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:23:02.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8936" for this suite. • [SLOW TEST:6.338 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":247,"skipped":4022,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:23:02.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 7 01:23:02.702: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 7 01:23:04.782: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411382, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411382, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411382, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411382, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 01:23:07.861: INFO: Waiting for amount of service:e2e-test-webhook endpoints 
to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:23:08.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7477" for this suite. STEP: Destroying namespace "webhook-7477-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.420 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":248,"skipped":4044,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:23:08.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 7 01:23:08.732: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1562 /api/v1/namespaces/watch-1562/configmaps/e2e-watch-test-resource-version 9cd33eb8-b244-452a-8945-1357850043f2 2185871 0 2020-05-07 01:23:08 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-07 01:23:08 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 7 01:23:08.732: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1562 /api/v1/namespaces/watch-1562/configmaps/e2e-watch-test-resource-version 9cd33eb8-b244-452a-8945-1357850043f2 2185872 0 2020-05-07 01:23:08 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-07 01:23:08 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:23:08.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1562" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":249,"skipped":4046,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:23:09.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-2019/configmap-test-efb1d9a3-c429-4e3d-8f48-cafdcbbd27a1 STEP: Creating a pod to test consume configMaps May 7 01:23:09.574: INFO: Waiting up to 5m0s for pod "pod-configmaps-f986649f-4fd2-495e-82c2-efdadfdafda2" in namespace "configmap-2019" to be "Succeeded or Failed" May 7 01:23:09.590: INFO: Pod "pod-configmaps-f986649f-4fd2-495e-82c2-efdadfdafda2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.533623ms May 7 01:23:11.620: INFO: Pod "pod-configmaps-f986649f-4fd2-495e-82c2-efdadfdafda2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046933683s May 7 01:23:13.623: INFO: Pod "pod-configmaps-f986649f-4fd2-495e-82c2-efdadfdafda2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049592949s STEP: Saw pod success May 7 01:23:13.623: INFO: Pod "pod-configmaps-f986649f-4fd2-495e-82c2-efdadfdafda2" satisfied condition "Succeeded or Failed" May 7 01:23:13.628: INFO: Trying to get logs from node latest-worker pod pod-configmaps-f986649f-4fd2-495e-82c2-efdadfdafda2 container env-test: STEP: delete the pod May 7 01:23:13.677: INFO: Waiting for pod pod-configmaps-f986649f-4fd2-495e-82c2-efdadfdafda2 to disappear May 7 01:23:13.682: INFO: Pod pod-configmaps-f986649f-4fd2-495e-82c2-efdadfdafda2 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:23:13.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2019" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":250,"skipped":4049,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:23:13.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-5434 STEP: creating a selector STEP: Creating the service pods in kubernetes May 7 01:23:13.776: INFO: Waiting up 
to 10m0s for all (but 0) nodes to be schedulable May 7 01:23:13.879: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 7 01:23:16.064: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 7 01:23:17.889: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 7 01:23:19.902: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 01:23:21.884: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 01:23:23.884: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 01:23:25.902: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 01:23:27.884: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 01:23:29.884: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 01:23:31.884: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 01:23:33.883: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 01:23:35.883: INFO: The status of Pod netserver-0 is Running (Ready = false) May 7 01:23:37.883: INFO: The status of Pod netserver-0 is Running (Ready = true) May 7 01:23:37.890: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 7 01:23:41.937: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.196:8080/dial?request=hostname&protocol=udp&host=10.244.1.195&port=8081&tries=1'] Namespace:pod-network-test-5434 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:23:41.937: INFO: >>> kubeConfig: /root/.kube/config I0507 01:23:41.970847 7 log.go:172] (0xc001fd60b0) (0xc0018cba40) Create stream I0507 01:23:41.970877 7 log.go:172] (0xc001fd60b0) (0xc0018cba40) Stream added, broadcasting: 1 I0507 01:23:41.972367 7 log.go:172] (0xc001fd60b0) Reply frame received for 1 I0507 
01:23:41.972413 7 log.go:172] (0xc001fd60b0) (0xc0018cbb80) Create stream I0507 01:23:41.972427 7 log.go:172] (0xc001fd60b0) (0xc0018cbb80) Stream added, broadcasting: 3 I0507 01:23:41.973489 7 log.go:172] (0xc001fd60b0) Reply frame received for 3 I0507 01:23:41.973521 7 log.go:172] (0xc001fd60b0) (0xc00154c320) Create stream I0507 01:23:41.973533 7 log.go:172] (0xc001fd60b0) (0xc00154c320) Stream added, broadcasting: 5 I0507 01:23:41.974332 7 log.go:172] (0xc001fd60b0) Reply frame received for 5 I0507 01:23:42.051581 7 log.go:172] (0xc001fd60b0) Data frame received for 3 I0507 01:23:42.051618 7 log.go:172] (0xc0018cbb80) (3) Data frame handling I0507 01:23:42.051640 7 log.go:172] (0xc0018cbb80) (3) Data frame sent I0507 01:23:42.052263 7 log.go:172] (0xc001fd60b0) Data frame received for 3 I0507 01:23:42.052291 7 log.go:172] (0xc0018cbb80) (3) Data frame handling I0507 01:23:42.052313 7 log.go:172] (0xc001fd60b0) Data frame received for 5 I0507 01:23:42.052321 7 log.go:172] (0xc00154c320) (5) Data frame handling I0507 01:23:42.053877 7 log.go:172] (0xc001fd60b0) Data frame received for 1 I0507 01:23:42.053900 7 log.go:172] (0xc0018cba40) (1) Data frame handling I0507 01:23:42.053918 7 log.go:172] (0xc0018cba40) (1) Data frame sent I0507 01:23:42.053931 7 log.go:172] (0xc001fd60b0) (0xc0018cba40) Stream removed, broadcasting: 1 I0507 01:23:42.054041 7 log.go:172] (0xc001fd60b0) (0xc0018cba40) Stream removed, broadcasting: 1 I0507 01:23:42.054065 7 log.go:172] (0xc001fd60b0) (0xc0018cbb80) Stream removed, broadcasting: 3 I0507 01:23:42.054079 7 log.go:172] (0xc001fd60b0) (0xc00154c320) Stream removed, broadcasting: 5 May 7 01:23:42.054: INFO: Waiting for responses: map[] I0507 01:23:42.054169 7 log.go:172] (0xc001fd60b0) Go away received May 7 01:23:42.057: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.196:8080/dial?request=hostname&protocol=udp&host=10.244.2.13&port=8081&tries=1'] Namespace:pod-network-test-5434 
PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 01:23:42.057: INFO: >>> kubeConfig: /root/.kube/config I0507 01:23:42.085036 7 log.go:172] (0xc002843ef0) (0xc001275680) Create stream I0507 01:23:42.085067 7 log.go:172] (0xc002843ef0) (0xc001275680) Stream added, broadcasting: 1 I0507 01:23:42.093833 7 log.go:172] (0xc002843ef0) Reply frame received for 1 I0507 01:23:42.093879 7 log.go:172] (0xc002843ef0) (0xc001275860) Create stream I0507 01:23:42.093892 7 log.go:172] (0xc002843ef0) (0xc001275860) Stream added, broadcasting: 3 I0507 01:23:42.094784 7 log.go:172] (0xc002843ef0) Reply frame received for 3 I0507 01:23:42.094821 7 log.go:172] (0xc002843ef0) (0xc001275a40) Create stream I0507 01:23:42.094836 7 log.go:172] (0xc002843ef0) (0xc001275a40) Stream added, broadcasting: 5 I0507 01:23:42.096565 7 log.go:172] (0xc002843ef0) Reply frame received for 5 I0507 01:23:42.156597 7 log.go:172] (0xc002843ef0) Data frame received for 3 I0507 01:23:42.156622 7 log.go:172] (0xc001275860) (3) Data frame handling I0507 01:23:42.156637 7 log.go:172] (0xc001275860) (3) Data frame sent I0507 01:23:42.156830 7 log.go:172] (0xc002843ef0) Data frame received for 3 I0507 01:23:42.156853 7 log.go:172] (0xc001275860) (3) Data frame handling I0507 01:23:42.157389 7 log.go:172] (0xc002843ef0) Data frame received for 5 I0507 01:23:42.157410 7 log.go:172] (0xc001275a40) (5) Data frame handling I0507 01:23:42.158843 7 log.go:172] (0xc002843ef0) Data frame received for 1 I0507 01:23:42.158883 7 log.go:172] (0xc001275680) (1) Data frame handling I0507 01:23:42.158924 7 log.go:172] (0xc001275680) (1) Data frame sent I0507 01:23:42.158947 7 log.go:172] (0xc002843ef0) (0xc001275680) Stream removed, broadcasting: 1 I0507 01:23:42.158969 7 log.go:172] (0xc002843ef0) Go away received I0507 01:23:42.159116 7 log.go:172] (0xc002843ef0) (0xc001275680) Stream removed, broadcasting: 1 I0507 01:23:42.159157 7 log.go:172] 
(0xc002843ef0) (0xc001275860) Stream removed, broadcasting: 3 I0507 01:23:42.159170 7 log.go:172] (0xc002843ef0) (0xc001275a40) Stream removed, broadcasting: 5 May 7 01:23:42.159: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:23:42.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5434" for this suite. • [SLOW TEST:28.477 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":251,"skipped":4064,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:23:42.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-3aafb281-708e-4319-928d-41317af43dd1 STEP: Creating a pod to test consume secrets May 7 01:23:42.278: INFO: Waiting up to 5m0s for pod "pod-secrets-44e074f0-7fd5-4852-888b-e32e27a2df07" in namespace "secrets-6107" to be "Succeeded or Failed" May 7 01:23:42.294: INFO: Pod "pod-secrets-44e074f0-7fd5-4852-888b-e32e27a2df07": Phase="Pending", Reason="", readiness=false. Elapsed: 15.592589ms May 7 01:23:44.300: INFO: Pod "pod-secrets-44e074f0-7fd5-4852-888b-e32e27a2df07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021558606s May 7 01:23:46.304: INFO: Pod "pod-secrets-44e074f0-7fd5-4852-888b-e32e27a2df07": Phase="Running", Reason="", readiness=true. Elapsed: 4.025656988s May 7 01:23:48.308: INFO: Pod "pod-secrets-44e074f0-7fd5-4852-888b-e32e27a2df07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029546985s STEP: Saw pod success May 7 01:23:48.308: INFO: Pod "pod-secrets-44e074f0-7fd5-4852-888b-e32e27a2df07" satisfied condition "Succeeded or Failed" May 7 01:23:48.311: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-44e074f0-7fd5-4852-888b-e32e27a2df07 container secret-volume-test: STEP: delete the pod May 7 01:23:48.570: INFO: Waiting for pod pod-secrets-44e074f0-7fd5-4852-888b-e32e27a2df07 to disappear May 7 01:23:48.624: INFO: Pod pod-secrets-44e074f0-7fd5-4852-888b-e32e27a2df07 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:23:48.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6107" for this suite. 
• [SLOW TEST:6.537 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":252,"skipped":4082,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  7 01:23:48.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-be286e06-4056-4600-8abf-491b03f3eb4b
STEP: Creating a pod to test consume configMaps
May  7 01:23:48.940: INFO: Waiting up to 5m0s for pod "pod-configmaps-862eaff6-c9ea-4ec1-b2ac-6669cbc19e75" in namespace "configmap-1690" to be "Succeeded or Failed"
May  7 01:23:48.967: INFO: Pod "pod-configmaps-862eaff6-c9ea-4ec1-b2ac-6669cbc19e75": Phase="Pending", Reason="", readiness=false. Elapsed: 26.975639ms
May  7 01:23:50.971: INFO: Pod "pod-configmaps-862eaff6-c9ea-4ec1-b2ac-6669cbc19e75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031431509s
May  7 01:23:52.975: INFO: Pod "pod-configmaps-862eaff6-c9ea-4ec1-b2ac-6669cbc19e75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035388672s
STEP: Saw pod success
May  7 01:23:52.975: INFO: Pod "pod-configmaps-862eaff6-c9ea-4ec1-b2ac-6669cbc19e75" satisfied condition "Succeeded or Failed"
May  7 01:23:52.978: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-862eaff6-c9ea-4ec1-b2ac-6669cbc19e75 container configmap-volume-test:
STEP: delete the pod
May  7 01:23:53.012: INFO: Waiting for pod pod-configmaps-862eaff6-c9ea-4ec1-b2ac-6669cbc19e75 to disappear
May  7 01:23:53.051: INFO: Pod pod-configmaps-862eaff6-c9ea-4ec1-b2ac-6669cbc19e75 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  7 01:23:53.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1690" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":253,"skipped":4105,"failed":0}
------------------------------
[sig-apps] StatefulSet
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  7 01:23:53.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-3099
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating stateful set ss in namespace statefulset-3099
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3099
May  7 01:23:53.195: INFO: Found 0 stateful pods, waiting for 1
May  7 01:24:03.214: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May  7 01:24:03.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3099 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  7 01:24:03.680: INFO: stderr: "I0507 01:24:03.554355 2962 log.go:172] (0xc0008fd550) (0xc000b54460) Create stream\nI0507 01:24:03.554398 2962 log.go:172] (0xc0008fd550) (0xc000b54460) Stream added, broadcasting: 1\nI0507 01:24:03.558884 2962 log.go:172] (0xc0008fd550) Reply frame received for 1\nI0507 01:24:03.558933 2962 log.go:172] (0xc0008fd550) (0xc0006e6f00) Create stream\nI0507 01:24:03.558949 2962 log.go:172] (0xc0008fd550) (0xc0006e6f00) Stream added, broadcasting: 3\nI0507 01:24:03.559997 2962 log.go:172] (0xc0008fd550) Reply frame received for 3\nI0507 01:24:03.560042 2962 log.go:172] (0xc0008fd550) (0xc00065c5a0) Create stream\nI0507 01:24:03.560062 2962 log.go:172] (0xc0008fd550) (0xc00065c5a0) Stream added, broadcasting: 5\nI0507 01:24:03.561076 2962 log.go:172] (0xc0008fd550) Reply frame received for 5\nI0507 01:24:03.631130 2962 log.go:172] (0xc0008fd550) Data frame received for 5\nI0507 01:24:03.631154 2962 log.go:172] (0xc00065c5a0) (5) Data frame handling\nI0507 01:24:03.631167 2962 log.go:172] (0xc00065c5a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0507 01:24:03.673964 2962 log.go:172] (0xc0008fd550) Data frame received for 3\nI0507 01:24:03.674010 2962 log.go:172] (0xc0006e6f00) (3) Data frame handling\nI0507 01:24:03.674036 2962 log.go:172] (0xc0006e6f00) (3) Data frame sent\nI0507 01:24:03.674051 2962 log.go:172] (0xc0008fd550) Data frame received for 3\nI0507 01:24:03.674061 2962 log.go:172] (0xc0006e6f00) (3) Data frame handling\nI0507 01:24:03.674295 2962 log.go:172] (0xc0008fd550) Data frame received for 5\nI0507 01:24:03.674327 2962 log.go:172] (0xc00065c5a0) (5) Data frame handling\nI0507 01:24:03.675749 2962 log.go:172] (0xc0008fd550) Data frame received for 1\nI0507 01:24:03.675791 2962 log.go:172] (0xc000b54460) (1) Data frame handling\nI0507 01:24:03.675820 2962 log.go:172] (0xc000b54460) (1) Data frame sent\nI0507 01:24:03.675850 2962 log.go:172] (0xc0008fd550) (0xc000b54460) Stream removed, broadcasting: 1\nI0507 01:24:03.675880 2962 log.go:172] (0xc0008fd550) Go away received\nI0507 01:24:03.676161 2962 log.go:172] (0xc0008fd550) (0xc000b54460) Stream removed, broadcasting: 1\nI0507 01:24:03.676181 2962 log.go:172] (0xc0008fd550) (0xc0006e6f00) Stream removed, broadcasting: 3\nI0507 01:24:03.676194 2962 log.go:172] (0xc0008fd550) (0xc00065c5a0) Stream removed, broadcasting: 5\n"
May  7 01:24:03.680: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  7 01:24:03.680: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May  7 01:24:03.684: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May  7 01:24:13.688: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May  7 01:24:13.688: INFO: Waiting for statefulset status.replicas updated to 0
May  7 01:24:13.713: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
May  7 01:24:13.713: INFO: ss-0  latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC  }]
May  7 01:24:13.713: INFO:
May  7 01:24:13.713: INFO: StatefulSet ss has not reached scale 3, at 1
May  7 01:24:14.718: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.985537751s
May  7 01:24:16.245: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.980400345s
May  7 01:24:17.395: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.45392441s
May  7 01:24:18.401: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.303465965s
May  7 01:24:19.406: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.297580748s
May  7 01:24:20.411: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.292296915s
May  7 01:24:21.416: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.287755214s
May  7 01:24:22.430: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.282615506s
May  7 01:24:23.436: INFO: Verifying statefulset ss doesn't scale past 3 for another 268.806439ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3099
May  7 01:24:24.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3099 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  7 01:24:24.694: INFO: stderr: "I0507 01:24:24.595146 2983 log.go:172] (0xc00051e210) (0xc0000efb80) Create stream\nI0507 01:24:24.595225 2983 log.go:172] (0xc00051e210) (0xc0000efb80) Stream added, broadcasting: 1\nI0507 01:24:24.598240 2983 log.go:172] (0xc00051e210) Reply frame received for 1\nI0507 01:24:24.598280 2983 log.go:172] (0xc00051e210) (0xc0001561e0) Create stream\nI0507 01:24:24.598293 2983 log.go:172] (0xc00051e210) (0xc0001561e0) Stream added, broadcasting: 3\nI0507 01:24:24.599257 2983 log.go:172] (0xc00051e210) Reply frame received for 3\nI0507 01:24:24.599290 2983 log.go:172] (0xc00051e210) (0xc000157860) Create stream\nI0507 01:24:24.599300 2983 log.go:172] (0xc00051e210) (0xc000157860) Stream added, broadcasting: 5\nI0507 01:24:24.600127 2983 log.go:172] (0xc00051e210) Reply frame received for 5\nI0507 01:24:24.686635 2983 log.go:172] (0xc00051e210) Data frame received for 5\nI0507 01:24:24.686693 2983 log.go:172] (0xc000157860) (5) Data frame handling\nI0507 01:24:24.686715 2983 log.go:172] (0xc000157860) (5) Data frame sent\nI0507 01:24:24.686731 2983 log.go:172] (0xc00051e210) Data frame received for 5\nI0507 01:24:24.686740 2983 log.go:172] (0xc000157860) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0507 01:24:24.686779 2983 log.go:172] (0xc00051e210) Data frame received for 3\nI0507 01:24:24.686802 2983 log.go:172] (0xc0001561e0) (3) Data frame handling\nI0507 01:24:24.686818 2983 log.go:172] (0xc0001561e0) (3) Data frame sent\nI0507 01:24:24.686839 2983 log.go:172] (0xc00051e210) Data frame received for 3\nI0507 01:24:24.686854 2983 log.go:172] (0xc0001561e0) (3) Data frame handling\nI0507 01:24:24.688330 2983 log.go:172] (0xc00051e210) Data frame received for 1\nI0507 01:24:24.688353 2983 log.go:172] (0xc0000efb80) (1) Data frame handling\nI0507 01:24:24.688362 2983 log.go:172] (0xc0000efb80) (1) Data frame sent\nI0507 01:24:24.688374 2983 log.go:172] (0xc00051e210) (0xc0000efb80) Stream removed, broadcasting: 1\nI0507 01:24:24.688385 2983 log.go:172] (0xc00051e210) Go away received\nI0507 01:24:24.688846 2983 log.go:172] (0xc00051e210) (0xc0000efb80) Stream removed, broadcasting: 1\nI0507 01:24:24.688873 2983 log.go:172] (0xc00051e210) (0xc0001561e0) Stream removed, broadcasting: 3\nI0507 01:24:24.688886 2983 log.go:172] (0xc00051e210) (0xc000157860) Stream removed, broadcasting: 5\n"
May  7 01:24:24.694: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  7 01:24:24.694: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May  7 01:24:24.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3099 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  7 01:24:24.945: INFO: stderr: "I0507 01:24:24.836060 3004 log.go:172] (0xc0009bc9a0) (0xc000601cc0) Create stream\nI0507 01:24:24.836166 3004 log.go:172] (0xc0009bc9a0) (0xc000601cc0) Stream added, broadcasting: 1\nI0507 01:24:24.839682 3004 log.go:172] (0xc0009bc9a0) Reply frame received for 1\nI0507 01:24:24.839734 3004 log.go:172] (0xc0009bc9a0) (0xc0005f85a0) Create stream\nI0507 01:24:24.839746 3004 log.go:172] (0xc0009bc9a0) (0xc0005f85a0) Stream added, broadcasting: 3\nI0507 01:24:24.840577 3004 log.go:172] (0xc0009bc9a0) Reply frame received for 3\nI0507 01:24:24.840603 3004 log.go:172] (0xc0009bc9a0) (0xc0004fa280) Create stream\nI0507 01:24:24.840610 3004 log.go:172] (0xc0009bc9a0) (0xc0004fa280) Stream added, broadcasting: 5\nI0507 01:24:24.841556 3004 log.go:172] (0xc0009bc9a0) Reply frame received for 5\nI0507 01:24:24.938044 3004 log.go:172] (0xc0009bc9a0) Data frame received for 3\nI0507 01:24:24.938093 3004 log.go:172] (0xc0005f85a0) (3) Data frame handling\nI0507 01:24:24.938109 3004 log.go:172] (0xc0005f85a0) (3) Data frame sent\nI0507 01:24:24.938120 3004 log.go:172] (0xc0009bc9a0) Data frame received for 3\nI0507 01:24:24.938143 3004 log.go:172] (0xc0009bc9a0) Data frame received for 5\nI0507 01:24:24.938171 3004 log.go:172] (0xc0004fa280) (5) Data frame handling\nI0507 01:24:24.938208 3004 log.go:172] (0xc0004fa280) (5) Data frame sent\nI0507 01:24:24.938227 3004 log.go:172] (0xc0009bc9a0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0507 01:24:24.938254 3004 log.go:172] (0xc0004fa280) (5) Data frame handling\nI0507 01:24:24.938313 3004 log.go:172] (0xc0005f85a0) (3) Data frame handling\nI0507 01:24:24.939538 3004 log.go:172] (0xc0009bc9a0) Data frame received for 1\nI0507 01:24:24.939572 3004 log.go:172] (0xc000601cc0) (1) Data frame handling\nI0507 01:24:24.939592 3004 log.go:172] (0xc000601cc0) (1) Data frame sent\nI0507 01:24:24.939609 3004 log.go:172] (0xc0009bc9a0) (0xc000601cc0) Stream removed, broadcasting: 1\nI0507 01:24:24.939687 3004 log.go:172] (0xc0009bc9a0) Go away received\nI0507 01:24:24.940000 3004 log.go:172] (0xc0009bc9a0) (0xc000601cc0) Stream removed, broadcasting: 1\nI0507 01:24:24.940023 3004 log.go:172] (0xc0009bc9a0) (0xc0005f85a0) Stream removed, broadcasting: 3\nI0507 01:24:24.940042 3004 log.go:172] (0xc0009bc9a0) (0xc0004fa280) Stream removed, broadcasting: 5\n"
May  7 01:24:24.945: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  7 01:24:24.945: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May  7 01:24:24.945: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  7 01:24:25.206: INFO: stderr: "I0507 01:24:25.118513 3024 log.go:172] (0xc000a43600) (0xc00085f400) Create stream\nI0507 01:24:25.118577 3024 log.go:172] (0xc000a43600) (0xc00085f400) Stream added, broadcasting: 1\nI0507 01:24:25.127612 3024 log.go:172] (0xc000a43600) Reply frame received for 1\nI0507 01:24:25.129457 3024 log.go:172] (0xc000a43600) (0xc00085fe00) Create stream\nI0507 01:24:25.129502 3024 log.go:172] (0xc000a43600) (0xc00085fe00) Stream added, broadcasting: 3\nI0507 01:24:25.130832 3024 log.go:172] (0xc000a43600) Reply frame received for 3\nI0507 01:24:25.130860 3024 log.go:172] (0xc000a43600) (0xc0006d8be0) Create stream\nI0507 01:24:25.130868 3024 log.go:172] (0xc000a43600) (0xc0006d8be0) Stream added, broadcasting: 5\nI0507 01:24:25.131545 3024 log.go:172] (0xc000a43600) Reply frame received for 5\nI0507 01:24:25.198307 3024 log.go:172] (0xc000a43600) Data frame received for 3\nI0507 01:24:25.198349 3024 log.go:172] (0xc00085fe00) (3) Data frame handling\nI0507 01:24:25.198379 3024 log.go:172] (0xc00085fe00) (3) Data frame sent\nI0507 01:24:25.198396 3024 log.go:172] (0xc000a43600) Data frame received for 3\nI0507 01:24:25.198420 3024 log.go:172] (0xc00085fe00) (3) Data frame handling\nI0507 01:24:25.198704 3024 log.go:172] (0xc000a43600) Data frame received for 5\nI0507 01:24:25.198745 3024 log.go:172] (0xc0006d8be0) (5) Data frame handling\nI0507 01:24:25.198768 3024 log.go:172] (0xc0006d8be0) (5) Data frame sent\nI0507 01:24:25.198783 3024 log.go:172] (0xc000a43600) Data frame received for 5\nI0507 01:24:25.198796 3024 log.go:172] (0xc0006d8be0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0507 01:24:25.200077 3024 log.go:172] (0xc000a43600) Data frame received for 1\nI0507 01:24:25.200107 3024 log.go:172] (0xc00085f400) (1) Data frame handling\nI0507 01:24:25.200127 3024 log.go:172] (0xc00085f400) (1) Data frame sent\nI0507 01:24:25.200281 3024 log.go:172] (0xc000a43600) (0xc00085f400) Stream removed, broadcasting: 1\nI0507 01:24:25.200327 3024 log.go:172] (0xc000a43600) Go away received\nI0507 01:24:25.200791 3024 log.go:172] (0xc000a43600) (0xc00085f400) Stream removed, broadcasting: 1\nI0507 01:24:25.200815 3024 log.go:172] (0xc000a43600) (0xc00085fe00) Stream removed, broadcasting: 3\nI0507 01:24:25.200826 3024 log.go:172] (0xc000a43600) (0xc0006d8be0) Stream removed, broadcasting: 5\n"
May  7 01:24:25.206: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  7 01:24:25.206: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May  7 01:24:25.211: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May  7 01:24:25.211: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May  7 01:24:25.211: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
May  7 01:24:25.214: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3099 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  7 01:24:25.421: INFO: stderr: "I0507 01:24:25.347448 3043 log.go:172] (0xc000c11340) (0xc000af8280) Create stream\nI0507 01:24:25.347581 3043 log.go:172] (0xc000c11340) (0xc000af8280) Stream added, broadcasting: 1\nI0507 01:24:25.351851 3043 log.go:172] (0xc000c11340) Reply frame received for 1\nI0507 01:24:25.351890 3043 log.go:172] (0xc000c11340) (0xc00084cb40) Create stream\nI0507 01:24:25.351899 3043 log.go:172] (0xc000c11340) (0xc00084cb40) Stream added, broadcasting: 3\nI0507 01:24:25.352657 3043 log.go:172] (0xc000c11340) Reply frame received for 3\nI0507 01:24:25.352678 3043 log.go:172] (0xc000c11340) (0xc000842dc0) Create stream\nI0507 01:24:25.352685 3043 log.go:172] (0xc000c11340) (0xc000842dc0) Stream added, broadcasting: 5\nI0507 01:24:25.353488 3043 log.go:172] (0xc000c11340) Reply frame received for 5\nI0507 01:24:25.414501 3043 log.go:172] (0xc000c11340) Data frame received for 5\nI0507 01:24:25.414536 3043 log.go:172] (0xc000842dc0) (5) Data frame handling\nI0507 01:24:25.414549 3043 log.go:172] (0xc000842dc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0507 01:24:25.414621 3043 log.go:172] (0xc000c11340) Data frame received for 3\nI0507 01:24:25.414667 3043 log.go:172] (0xc00084cb40) (3) Data frame handling\nI0507 01:24:25.414690 3043 log.go:172] (0xc00084cb40) (3) Data frame sent\nI0507 01:24:25.414713 3043 log.go:172] (0xc000c11340) Data frame received for 3\nI0507 01:24:25.414732 3043 log.go:172] (0xc00084cb40) (3) Data frame handling\nI0507 01:24:25.414799 3043 log.go:172] (0xc000c11340) Data frame received for 5\nI0507 01:24:25.414843 3043 log.go:172] (0xc000842dc0) (5) Data frame handling\nI0507 01:24:25.416288 3043 log.go:172] (0xc000c11340) Data frame received for 1\nI0507 01:24:25.416325 3043 log.go:172] (0xc000af8280) (1) Data frame handling\nI0507 01:24:25.416353 3043 log.go:172] (0xc000af8280) (1) Data frame sent\nI0507 01:24:25.416372 3043 log.go:172] (0xc000c11340) (0xc000af8280) Stream removed, broadcasting: 1\nI0507 01:24:25.416396 3043 log.go:172] (0xc000c11340) Go away received\nI0507 01:24:25.416830 3043 log.go:172] (0xc000c11340) (0xc000af8280) Stream removed, broadcasting: 1\nI0507 01:24:25.416866 3043 log.go:172] (0xc000c11340) (0xc00084cb40) Stream removed, broadcasting: 3\nI0507 01:24:25.416889 3043 log.go:172] (0xc000c11340) (0xc000842dc0) Stream removed, broadcasting: 5\n"
May  7 01:24:25.422: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  7 01:24:25.422: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May  7 01:24:25.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3099 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  7 01:24:25.669: INFO: stderr: "I0507 01:24:25.565638 3064 log.go:172] (0xc000997340) (0xc000d2e280) Create stream\nI0507 01:24:25.565687 3064 log.go:172] (0xc000997340) (0xc000d2e280) Stream added, broadcasting: 1\nI0507 01:24:25.570721 3064 log.go:172] (0xc000997340) Reply frame received for 1\nI0507 01:24:25.570777 3064 log.go:172] (0xc000997340) (0xc00065a1e0) Create stream\nI0507 01:24:25.570811 3064 log.go:172] (0xc000997340) (0xc00065a1e0) Stream added, broadcasting: 3\nI0507 01:24:25.571701 3064 log.go:172] (0xc000997340) Reply frame received for 3\nI0507 01:24:25.571727 3064 log.go:172] (0xc000997340) (0xc0005aed20) Create stream\nI0507 01:24:25.571735 3064 log.go:172] (0xc000997340) (0xc0005aed20) Stream added, broadcasting: 5\nI0507 01:24:25.572567 3064 log.go:172] (0xc000997340) Reply frame received for 5\nI0507 01:24:25.629589 3064 log.go:172] (0xc000997340) Data frame received for 5\nI0507 01:24:25.629622 3064 log.go:172] (0xc0005aed20) (5) Data frame handling\nI0507 01:24:25.629643 3064 log.go:172] (0xc0005aed20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0507 01:24:25.659604 3064 log.go:172] (0xc000997340) Data frame received for 3\nI0507 01:24:25.659649 3064 log.go:172] (0xc00065a1e0) (3) Data frame handling\nI0507 01:24:25.659688 3064 log.go:172] (0xc00065a1e0) (3) Data frame sent\nI0507 01:24:25.659709 3064 log.go:172] (0xc000997340) Data frame received for 3\nI0507 01:24:25.659964 3064 log.go:172] (0xc000997340) Data frame received for 5\nI0507 01:24:25.660017 3064 log.go:172] (0xc0005aed20) (5) Data frame handling\nI0507 01:24:25.660046 3064 log.go:172] (0xc00065a1e0) (3) Data frame handling\nI0507 01:24:25.662073 3064 log.go:172] (0xc000997340) Data frame received for 1\nI0507 01:24:25.662106 3064 log.go:172] (0xc000d2e280) (1) Data frame handling\nI0507 01:24:25.662140 3064 log.go:172] (0xc000d2e280) (1) Data frame sent\nI0507 01:24:25.662292 3064 log.go:172] (0xc000997340) (0xc000d2e280) Stream removed, broadcasting: 1\nI0507 01:24:25.662414 3064 log.go:172] (0xc000997340) Go away received\nI0507 01:24:25.662900 3064 log.go:172] (0xc000997340) (0xc000d2e280) Stream removed, broadcasting: 1\nI0507 01:24:25.662922 3064 log.go:172] (0xc000997340) (0xc00065a1e0) Stream removed, broadcasting: 3\nI0507 01:24:25.662934 3064 log.go:172] (0xc000997340) (0xc0005aed20) Stream removed, broadcasting: 5\n"
May  7 01:24:25.669: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  7 01:24:25.669: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May  7 01:24:25.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3099 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  7 01:24:25.931: INFO: stderr: "I0507 01:24:25.803655 3083 log.go:172] (0xc00003ae70) (0xc0005ad2c0) Create stream\nI0507 01:24:25.803738 3083 log.go:172] (0xc00003ae70) (0xc0005ad2c0) Stream added, broadcasting: 1\nI0507 01:24:25.806407 3083 log.go:172] (0xc00003ae70) Reply frame received for 1\nI0507 01:24:25.806447 3083 log.go:172] (0xc00003ae70) (0xc000380e60) Create stream\nI0507 01:24:25.806459 3083 log.go:172] (0xc00003ae70) (0xc000380e60) Stream added, broadcasting: 3\nI0507 01:24:25.807386 3083 log.go:172] (0xc00003ae70) Reply frame received for 3\nI0507 01:24:25.807430 3083 log.go:172] (0xc00003ae70) (0xc0006b0dc0) Create stream\nI0507 01:24:25.807441 3083 log.go:172] (0xc00003ae70) (0xc0006b0dc0) Stream added, broadcasting: 5\nI0507 01:24:25.808556 3083 log.go:172] (0xc00003ae70) Reply frame received for 5\nI0507 01:24:25.885354 3083 log.go:172] (0xc00003ae70) Data frame received for 5\nI0507 01:24:25.885382 3083 log.go:172] (0xc0006b0dc0) (5) Data frame handling\nI0507 01:24:25.885397 3083 log.go:172] (0xc0006b0dc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0507 01:24:25.923183 3083 log.go:172] (0xc00003ae70) Data frame received for 5\nI0507 01:24:25.923240 3083 log.go:172] (0xc0006b0dc0) (5) Data frame handling\nI0507 01:24:25.923273 3083 log.go:172] (0xc00003ae70) Data frame received for 3\nI0507 01:24:25.923291 3083 log.go:172] (0xc000380e60) (3) Data frame handling\nI0507 01:24:25.923311 3083 log.go:172] (0xc000380e60) (3) Data frame sent\nI0507 01:24:25.923335 3083 log.go:172] (0xc00003ae70) Data frame received for 3\nI0507 01:24:25.923350 3083 log.go:172] (0xc000380e60) (3) Data frame handling\nI0507 01:24:25.925679 3083 log.go:172] (0xc00003ae70) Data frame received for 1\nI0507 01:24:25.925721 3083 log.go:172] (0xc0005ad2c0) (1) Data frame handling\nI0507 01:24:25.925745 3083 log.go:172] (0xc0005ad2c0) (1) Data frame sent\nI0507 01:24:25.925768 3083 log.go:172] (0xc00003ae70) (0xc0005ad2c0) Stream removed, broadcasting: 1\nI0507 01:24:25.925806 3083 log.go:172] (0xc00003ae70) Go away received\nI0507 01:24:25.926279 3083 log.go:172] (0xc00003ae70) (0xc0005ad2c0) Stream removed, broadcasting: 1\nI0507 01:24:25.926301 3083 log.go:172] (0xc00003ae70) (0xc000380e60) Stream removed, broadcasting: 3\nI0507 01:24:25.926312 3083 log.go:172] (0xc00003ae70) (0xc0006b0dc0) Stream removed, broadcasting: 5\n"
May  7 01:24:25.931: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  7 01:24:25.931: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May  7 01:24:25.931: INFO: Waiting for statefulset status.replicas updated to 0
May  7 01:24:25.935: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
May  7 01:24:35.944: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May  7 01:24:35.944: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May  7 01:24:35.944: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May  7 01:24:35.954: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
May  7 01:24:35.955: INFO: ss-0  latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC  }]
May  7 01:24:35.955: INFO: ss-1  latest-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  }]
May  7 01:24:35.955: INFO: ss-2  latest-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  }]
May  7 01:24:35.955: INFO:
May  7 01:24:35.955: INFO: StatefulSet ss has not reached scale 0, at 3
May  7 01:24:36.960: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
May  7 01:24:36.960: INFO: ss-0  latest-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC  }]
May  7 01:24:36.960: INFO: ss-1  latest-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  }]
May  7 01:24:36.960: INFO: ss-2  latest-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  }]
May  7 01:24:36.960: INFO:
May  7 01:24:36.960: INFO: StatefulSet ss has not reached scale 0, at 3
May  7 01:24:37.965: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
May  7 01:24:37.965: INFO: ss-0  latest-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC  }]
May  7 01:24:37.965: INFO: ss-1  latest-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  }]
May  7 01:24:37.965: INFO: ss-2  latest-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  }]
May  7 01:24:37.965: INFO:
May  7 01:24:37.965: INFO: StatefulSet ss has not reached scale 0, at 3
May  7 01:24:38.971: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
May  7 01:24:38.971: INFO: ss-0  latest-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC  }]
May  7 01:24:38.971: INFO: ss-1  latest-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  }]
May  7 01:24:38.971: INFO: ss-2  latest-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  }]
May  7 01:24:38.971: INFO:
May  7 01:24:38.971: INFO: StatefulSet ss has not reached scale 0, at 3
May  7 01:24:39.977: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
May  7 01:24:39.977: INFO: ss-0  latest-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC  }]
May  7 01:24:39.977: INFO: ss-2  latest-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  }]
May  7 01:24:39.977: INFO:
May  7 01:24:39.977: INFO: StatefulSet ss has not reached scale 0, at 2
May  7 01:24:40.982: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
May  7 01:24:40.982: INFO: ss-0  latest-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC  }]
May  7 01:24:40.982: INFO: ss-2  latest-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000
UTC 2020-05-07 01:24:13 +0000 UTC }] May 7 01:24:40.982: INFO: May 7 01:24:40.982: INFO: StatefulSet ss has not reached scale 0, at 2 May 7 01:24:41.988: INFO: POD NODE PHASE GRACE CONDITIONS May 7 01:24:41.988: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC }] May 7 01:24:41.988: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC }] May 7 01:24:41.988: INFO: May 7 01:24:41.988: INFO: StatefulSet ss has not reached scale 0, at 2 May 7 01:24:42.994: INFO: POD NODE PHASE GRACE CONDITIONS May 7 01:24:42.994: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC }] May 7 01:24:42.994: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC } 
{Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC }] May 7 01:24:42.994: INFO: May 7 01:24:42.994: INFO: StatefulSet ss has not reached scale 0, at 2 May 7 01:24:44.001: INFO: POD NODE PHASE GRACE CONDITIONS May 7 01:24:44.001: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC }] May 7 01:24:44.001: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:13 +0000 UTC }] May 7 01:24:44.001: INFO: May 7 01:24:44.001: INFO: StatefulSet ss has not reached scale 0, at 2 May 7 01:24:45.005: INFO: POD NODE PHASE GRACE CONDITIONS May 7 01:24:45.005: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 
00:00:00 +0000 UTC 2020-05-07 01:24:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 01:23:53 +0000 UTC }] May 7 01:24:45.005: INFO: May 7 01:24:45.005: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-3099 May 7 01:24:46.010: INFO: Scaling statefulset ss to 0 May 7 01:24:46.021: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 7 01:24:46.024: INFO: Deleting all statefulset in ns statefulset-3099 May 7 01:24:46.026: INFO: Scaling statefulset ss to 0 May 7 01:24:46.036: INFO: Waiting for statefulset status.replicas updated to 0 May 7 01:24:46.039: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:24:46.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3099" for this suite. 
• [SLOW TEST:53.000 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":254,"skipped":4105,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:24:46.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 7 01:24:50.286: INFO: Expected: &{DONE} to match Container's 
Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:24:50.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7369" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":255,"skipped":4112,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:24:50.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:24:54.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4775" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":256,"skipped":4157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:24:54.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 01:24:54.693: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-d482916b-c3a1-4853-b20e-14d05ad0d12b" in namespace "security-context-test-2238" to be "Succeeded or Failed" May 7 01:24:54.697: INFO: Pod "busybox-readonly-false-d482916b-c3a1-4853-b20e-14d05ad0d12b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.20831ms May 7 01:24:56.710: INFO: Pod "busybox-readonly-false-d482916b-c3a1-4853-b20e-14d05ad0d12b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016887095s May 7 01:24:58.713: INFO: Pod "busybox-readonly-false-d482916b-c3a1-4853-b20e-14d05ad0d12b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019968321s May 7 01:24:58.713: INFO: Pod "busybox-readonly-false-d482916b-c3a1-4853-b20e-14d05ad0d12b" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:24:58.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2238" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":257,"skipped":4186,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:24:58.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 7 01:24:58.803: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix742154543/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:24:58.885: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2582" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":258,"skipped":4193,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:24:58.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7318.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7318.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7318.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7318.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7318.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7318.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 01:25:05.013: INFO: DNS probes using dns-7318/dns-test-a492a173-c611-4a0b-b3b9-dd7e28ee6751 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:25:05.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7318" for this suite. 
• [SLOW TEST:6.248 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":259,"skipped":4211,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:25:05.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 01:25:05.715: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:25:09.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1235" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":260,"skipped":4216,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:25:09.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 7 01:25:09.874: INFO: Waiting up to 5m0s for pod "pod-568383c4-482a-4209-b57c-b52905373e6d" in namespace "emptydir-1168" to be "Succeeded or Failed" May 7 01:25:09.906: INFO: Pod "pod-568383c4-482a-4209-b57c-b52905373e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 31.791809ms May 7 01:25:11.910: INFO: Pod "pod-568383c4-482a-4209-b57c-b52905373e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036006947s May 7 01:25:13.914: INFO: Pod "pod-568383c4-482a-4209-b57c-b52905373e6d": Phase="Running", Reason="", readiness=true. Elapsed: 4.040165264s May 7 01:25:15.918: INFO: Pod "pod-568383c4-482a-4209-b57c-b52905373e6d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.044387915s STEP: Saw pod success May 7 01:25:15.918: INFO: Pod "pod-568383c4-482a-4209-b57c-b52905373e6d" satisfied condition "Succeeded or Failed" May 7 01:25:15.922: INFO: Trying to get logs from node latest-worker2 pod pod-568383c4-482a-4209-b57c-b52905373e6d container test-container: STEP: delete the pod May 7 01:25:15.942: INFO: Waiting for pod pod-568383c4-482a-4209-b57c-b52905373e6d to disappear May 7 01:25:15.960: INFO: Pod pod-568383c4-482a-4209-b57c-b52905373e6d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:25:15.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1168" for this suite. • [SLOW TEST:6.185 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":261,"skipped":4319,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:25:15.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in 
namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 01:25:16.063: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 7 01:25:19.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2021 create -f -' May 7 01:25:22.201: INFO: stderr: "" May 7 01:25:22.201: INFO: stdout: "e2e-test-crd-publish-openapi-9314-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 7 01:25:22.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2021 delete e2e-test-crd-publish-openapi-9314-crds test-cr' May 7 01:25:22.338: INFO: stderr: "" May 7 01:25:22.338: INFO: stdout: "e2e-test-crd-publish-openapi-9314-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 7 01:25:22.338: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2021 apply -f -' May 7 01:25:22.624: INFO: stderr: "" May 7 01:25:22.624: INFO: stdout: "e2e-test-crd-publish-openapi-9314-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 7 01:25:22.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2021 delete e2e-test-crd-publish-openapi-9314-crds test-cr' May 7 01:25:22.727: INFO: stderr: "" May 7 01:25:22.727: INFO: stdout: "e2e-test-crd-publish-openapi-9314-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 7 01:25:22.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9314-crds' May 7 01:25:22.973: INFO: stderr: "" May 7 01:25:22.973: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9314-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:25:24.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2021" for this suite. • [SLOW TEST:8.919 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":262,"skipped":4321,"failed":0} [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:25:24.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod 
STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 7 01:25:29.470: INFO: Successfully updated pod "pod-update-19f4b077-fa9d-40cc-8bcb-e85fa5594d9f" STEP: verifying the updated pod is in kubernetes May 7 01:25:29.498: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:25:29.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1485" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":263,"skipped":4321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:25:29.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 7 01:25:37.754: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 01:25:37.768: INFO: Pod pod-with-prestop-exec-hook still exists May 7 01:25:39.768: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 01:25:39.773: INFO: Pod pod-with-prestop-exec-hook still exists May 7 01:25:41.768: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 01:25:41.772: INFO: Pod pod-with-prestop-exec-hook still exists May 7 01:25:43.768: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 01:25:43.773: INFO: Pod pod-with-prestop-exec-hook still exists May 7 01:25:45.768: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 01:25:45.772: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:25:45.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3539" for this suite. 
• [SLOW TEST:16.282 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":264,"skipped":4385,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:25:45.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0507 01:25:47.113500 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 7 01:25:47.113: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:25:47.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4952" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":265,"skipped":4387,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:25:47.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components May 7 01:25:47.287: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 7 01:25:47.287: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6944' May 7 01:25:47.635: INFO: stderr: "" May 7 01:25:47.635: INFO: stdout: "service/agnhost-slave created\n" May 7 01:25:47.636: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 7 01:25:47.636: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6944' May 7 
01:25:48.006: INFO: stderr: "" May 7 01:25:48.006: INFO: stdout: "service/agnhost-master created\n" May 7 01:25:48.007: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 7 01:25:48.007: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6944' May 7 01:25:49.103: INFO: stderr: "" May 7 01:25:49.103: INFO: stdout: "service/frontend created\n" May 7 01:25:49.103: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 7 01:25:49.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6944' May 7 01:25:49.589: INFO: stderr: "" May 7 01:25:49.589: INFO: stdout: "deployment.apps/frontend created\n" May 7 01:25:49.589: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 7 01:25:49.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6944' May 7 01:25:49.905: INFO: stderr: "" May 7 01:25:49.905: INFO: stdout: "deployment.apps/agnhost-master created\n" May 7 01:25:49.905: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 7 01:25:49.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6944' May 7 01:25:50.185: INFO: stderr: "" May 7 01:25:50.185: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 7 01:25:50.185: INFO: Waiting for all frontend pods to be Running. May 7 01:26:00.236: INFO: Waiting for frontend to serve content. May 7 01:26:00.248: INFO: Trying to add a new entry to the guestbook. May 7 01:26:00.257: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 7 01:26:00.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6944' May 7 01:26:00.469: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 7 01:26:00.469: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 7 01:26:00.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6944' May 7 01:26:00.606: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 7 01:26:00.606: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 7 01:26:00.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6944' May 7 01:26:00.788: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 7 01:26:00.788: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 7 01:26:00.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6944' May 7 01:26:00.909: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 7 01:26:00.909: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 7 01:26:00.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6944' May 7 01:26:01.107: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 7 01:26:01.107: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 7 01:26:01.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6944' May 7 01:26:01.628: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 7 01:26:01.628: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:26:01.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6944" for this suite. 
• [SLOW TEST:14.848 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":266,"skipped":4389,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:26:01.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 7 01:26:02.431: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
May 7 01:26:03.539: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 7 01:26:06.698: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411563, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411563, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411564, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411563, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 01:26:08.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411563, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411563, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411564, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724411563, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 01:26:11.330: INFO: Waited 621.464045ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:26:11.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7367" for this suite. • [SLOW TEST:10.038 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":267,"skipped":4412,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:26:12.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should 
*not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-e4790e3a-e229-44b6-aa9e-2f2f22a451cc in namespace container-probe-454 May 7 01:26:16.348: INFO: Started pod test-webserver-e4790e3a-e229-44b6-aa9e-2f2f22a451cc in namespace container-probe-454 STEP: checking the pod's current state and verifying that restartCount is present May 7 01:26:16.352: INFO: Initial restart count of pod test-webserver-e4790e3a-e229-44b6-aa9e-2f2f22a451cc is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:30:16.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-454" for this suite. • [SLOW TEST:245.016 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":268,"skipped":4427,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:30:17.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode May 7 01:30:17.174: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2456" to be "Succeeded or Failed" May 7 01:30:17.429: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 254.98757ms May 7 01:30:19.433: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259571033s May 7 01:30:21.510: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336496069s May 7 01:30:23.514: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.340088973s STEP: Saw pod success May 7 01:30:23.514: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 7 01:30:23.516: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 7 01:30:23.721: INFO: Waiting for pod pod-host-path-test to disappear May 7 01:30:23.726: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:30:23.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-2456" for this suite. 
• [SLOW TEST:6.722 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":269,"skipped":4453,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:30:23.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-3d24275f-3d1e-4c7c-9339-a0d4fb2b2663 STEP: Creating secret with name s-test-opt-upd-add67166-b8f7-452c-82f0-7f6e00525615 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-3d24275f-3d1e-4c7c-9339-a0d4fb2b2663 STEP: Updating secret s-test-opt-upd-add67166-b8f7-452c-82f0-7f6e00525615 STEP: Creating secret with name s-test-opt-create-5891ff07-bcde-4747-8552-c2938feaa0d6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:31:36.373: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "projected-2618" for this suite. • [SLOW TEST:72.634 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":270,"skipped":4462,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:31:36.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1559 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1559;check="$$(dig +tcp +noall +answer +search 
dns-test-service.dns-1559 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1559;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1559.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1559.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1559.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1559.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1559.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1559.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1559.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1559.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1559.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1559.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1559.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1559.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1559.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 140.94.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.94.140_udp@PTR;check="$$(dig +tcp +noall +answer +search 140.94.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.94.140_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1559 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1559;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1559 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1559;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1559.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1559.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1559.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1559.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1559.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1559.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1559.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1559.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1559.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1559.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1559.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1559.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1559.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 140.94.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.94.140_udp@PTR;check="$$(dig +tcp +noall +answer +search 140.94.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.94.140_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 01:31:44.627: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:44.631: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:44.634: INFO: Unable to read wheezy_udp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:44.636: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:44.638: INFO: Unable to read wheezy_udp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods 
dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:44.667: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:44.671: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:44.674: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:44.698: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:44.701: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:44.704: INFO: Unable to read jessie_udp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:44.707: INFO: Unable to read jessie_tcp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:44.710: INFO: Unable to read jessie_udp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested 
resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:44.713: INFO: Unable to read jessie_tcp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:44.716: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:44.719: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:44.739: INFO: Lookups using dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1559 wheezy_tcp@dns-test-service.dns-1559 wheezy_udp@dns-test-service.dns-1559.svc wheezy_tcp@dns-test-service.dns-1559.svc wheezy_udp@_http._tcp.dns-test-service.dns-1559.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1559.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1559 jessie_tcp@dns-test-service.dns-1559 jessie_udp@dns-test-service.dns-1559.svc jessie_tcp@dns-test-service.dns-1559.svc jessie_udp@_http._tcp.dns-test-service.dns-1559.svc jessie_tcp@_http._tcp.dns-test-service.dns-1559.svc] May 7 01:31:49.745: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:49.754: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the 
requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:49.759: INFO: Unable to read wheezy_udp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:49.762: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:49.765: INFO: Unable to read wheezy_udp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:49.767: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:49.770: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:49.772: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:49.791: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:49.795: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could 
not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:49.797: INFO: Unable to read jessie_udp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:49.799: INFO: Unable to read jessie_tcp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:49.801: INFO: Unable to read jessie_udp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:49.804: INFO: Unable to read jessie_tcp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:49.806: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:49.808: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:49.821: INFO: Lookups using dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1559 wheezy_tcp@dns-test-service.dns-1559 wheezy_udp@dns-test-service.dns-1559.svc wheezy_tcp@dns-test-service.dns-1559.svc wheezy_udp@_http._tcp.dns-test-service.dns-1559.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-1559.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1559 jessie_tcp@dns-test-service.dns-1559 jessie_udp@dns-test-service.dns-1559.svc jessie_tcp@dns-test-service.dns-1559.svc jessie_udp@_http._tcp.dns-test-service.dns-1559.svc jessie_tcp@_http._tcp.dns-test-service.dns-1559.svc] May 7 01:31:54.747: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:54.750: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:54.754: INFO: Unable to read wheezy_udp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:54.756: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:54.759: INFO: Unable to read wheezy_udp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:54.762: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:54.764: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1559.svc from pod 
dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:54.767: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:54.809: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:54.812: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:54.816: INFO: Unable to read jessie_udp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:54.819: INFO: Unable to read jessie_tcp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:54.823: INFO: Unable to read jessie_udp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:54.826: INFO: Unable to read jessie_tcp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:54.830: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:54.833: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:54.850: INFO: Lookups using dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1559 wheezy_tcp@dns-test-service.dns-1559 wheezy_udp@dns-test-service.dns-1559.svc wheezy_tcp@dns-test-service.dns-1559.svc wheezy_udp@_http._tcp.dns-test-service.dns-1559.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1559.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1559 jessie_tcp@dns-test-service.dns-1559 jessie_udp@dns-test-service.dns-1559.svc jessie_tcp@dns-test-service.dns-1559.svc jessie_udp@_http._tcp.dns-test-service.dns-1559.svc jessie_tcp@_http._tcp.dns-test-service.dns-1559.svc] May 7 01:31:59.744: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:59.749: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:59.753: INFO: Unable to read wheezy_udp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:59.756: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:59.760: INFO: Unable to read wheezy_udp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:59.763: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:59.766: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:59.769: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:59.790: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:59.793: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:59.796: INFO: Unable to read jessie_udp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:59.799: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:59.802: INFO: Unable to read jessie_udp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:59.805: INFO: Unable to read jessie_tcp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:59.809: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:59.812: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:31:59.828: INFO: Lookups using dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1559 wheezy_tcp@dns-test-service.dns-1559 wheezy_udp@dns-test-service.dns-1559.svc wheezy_tcp@dns-test-service.dns-1559.svc wheezy_udp@_http._tcp.dns-test-service.dns-1559.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1559.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1559 jessie_tcp@dns-test-service.dns-1559 jessie_udp@dns-test-service.dns-1559.svc jessie_tcp@dns-test-service.dns-1559.svc jessie_udp@_http._tcp.dns-test-service.dns-1559.svc jessie_tcp@_http._tcp.dns-test-service.dns-1559.svc] 
May 7 01:32:04.745: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:04.750: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:04.754: INFO: Unable to read wheezy_udp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:04.757: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:04.760: INFO: Unable to read wheezy_udp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:04.762: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:04.765: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:04.768: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods 
dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:04.787: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:04.790: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:04.793: INFO: Unable to read jessie_udp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:04.796: INFO: Unable to read jessie_tcp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:04.799: INFO: Unable to read jessie_udp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:04.801: INFO: Unable to read jessie_tcp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:04.804: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:04.807: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested 
resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:04.826: INFO: Lookups using dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1559 wheezy_tcp@dns-test-service.dns-1559 wheezy_udp@dns-test-service.dns-1559.svc wheezy_tcp@dns-test-service.dns-1559.svc wheezy_udp@_http._tcp.dns-test-service.dns-1559.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1559.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1559 jessie_tcp@dns-test-service.dns-1559 jessie_udp@dns-test-service.dns-1559.svc jessie_tcp@dns-test-service.dns-1559.svc jessie_udp@_http._tcp.dns-test-service.dns-1559.svc jessie_tcp@_http._tcp.dns-test-service.dns-1559.svc] May 7 01:32:09.744: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:09.749: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:09.753: INFO: Unable to read wheezy_udp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:09.756: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:09.760: INFO: Unable to read wheezy_udp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods 
dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:09.763: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:09.767: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:09.770: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:09.793: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:09.797: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:09.800: INFO: Unable to read jessie_udp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:09.803: INFO: Unable to read jessie_tcp@dns-test-service.dns-1559 from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:09.807: INFO: Unable to read jessie_udp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested 
resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:09.810: INFO: Unable to read jessie_tcp@dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:09.814: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:09.817: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1559.svc from pod dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11: the server could not find the requested resource (get pods dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11) May 7 01:32:09.839: INFO: Lookups using dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1559 wheezy_tcp@dns-test-service.dns-1559 wheezy_udp@dns-test-service.dns-1559.svc wheezy_tcp@dns-test-service.dns-1559.svc wheezy_udp@_http._tcp.dns-test-service.dns-1559.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1559.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1559 jessie_tcp@dns-test-service.dns-1559 jessie_udp@dns-test-service.dns-1559.svc jessie_tcp@dns-test-service.dns-1559.svc jessie_udp@_http._tcp.dns-test-service.dns-1559.svc jessie_tcp@_http._tcp.dns-test-service.dns-1559.svc] May 7 01:32:14.845: INFO: DNS probes using dns-1559/dns-test-9dc52495-d084-4be1-acbd-eed67eef1b11 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:32:15.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "dns-1559" for this suite. • [SLOW TEST:39.272 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":271,"skipped":4475,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:32:15.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 7 01:32:15.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6331' May 7 01:32:16.114: INFO: stderr: "" May 7 01:32:16.114: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 7 01:32:16.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6331' May 7 01:32:16.254: INFO: stderr: "" May 7 01:32:16.254: INFO: stdout: "update-demo-nautilus-xmx77 update-demo-nautilus-xtlvq " May 7 01:32:16.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xmx77 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6331' May 7 01:32:16.353: INFO: stderr: "" May 7 01:32:16.353: INFO: stdout: "" May 7 01:32:16.353: INFO: update-demo-nautilus-xmx77 is created but not running May 7 01:32:21.353: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6331' May 7 01:32:21.472: INFO: stderr: "" May 7 01:32:21.472: INFO: stdout: "update-demo-nautilus-xmx77 update-demo-nautilus-xtlvq " May 7 01:32:21.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xmx77 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6331' May 7 01:32:21.565: INFO: stderr: "" May 7 01:32:21.565: INFO: stdout: "true" May 7 01:32:21.565: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xmx77 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6331' May 7 01:32:21.667: INFO: stderr: "" May 7 01:32:21.667: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 01:32:21.667: INFO: validating pod update-demo-nautilus-xmx77 May 7 01:32:21.671: INFO: got data: { "image": "nautilus.jpg" } May 7 01:32:21.671: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 7 01:32:21.671: INFO: update-demo-nautilus-xmx77 is verified up and running May 7 01:32:21.671: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xtlvq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6331' May 7 01:32:21.757: INFO: stderr: "" May 7 01:32:21.757: INFO: stdout: "true" May 7 01:32:21.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xtlvq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6331' May 7 01:32:21.850: INFO: stderr: "" May 7 01:32:21.850: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 01:32:21.850: INFO: validating pod update-demo-nautilus-xtlvq May 7 01:32:21.877: INFO: got data: { "image": "nautilus.jpg" } May 7 01:32:21.877: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 7 01:32:21.877: INFO: update-demo-nautilus-xtlvq is verified up and running STEP: scaling down the replication controller May 7 01:32:21.880: INFO: scanned /root for discovery docs: May 7 01:32:21.880: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6331' May 7 01:32:23.015: INFO: stderr: "" May 7 01:32:23.015: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 7 01:32:23.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6331' May 7 01:32:23.124: INFO: stderr: "" May 7 01:32:23.124: INFO: stdout: "update-demo-nautilus-xmx77 update-demo-nautilus-xtlvq " STEP: Replicas for name=update-demo: expected=1 actual=2 May 7 01:32:28.124: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6331' May 7 01:32:28.241: INFO: stderr: "" May 7 01:32:28.241: INFO: stdout: "update-demo-nautilus-xmx77 update-demo-nautilus-xtlvq " STEP: Replicas for name=update-demo: expected=1 actual=2 May 7 01:32:33.241: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6331' May 7 01:32:33.349: INFO: stderr: "" May 7 01:32:33.350: INFO: stdout: "update-demo-nautilus-xmx77 update-demo-nautilus-xtlvq " STEP: Replicas for name=update-demo: expected=1 actual=2 May 7 01:32:38.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6331' May 7 01:32:38.456: INFO: stderr: "" May 7 01:32:38.457: INFO: stdout: "update-demo-nautilus-xmx77 " May 7 01:32:38.457: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xmx77 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6331' May 7 01:32:38.550: INFO: stderr: "" May 7 01:32:38.550: INFO: stdout: "true" May 7 01:32:38.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xmx77 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6331' May 7 01:32:38.642: INFO: stderr: "" May 7 01:32:38.642: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 01:32:38.642: INFO: validating pod update-demo-nautilus-xmx77 May 7 01:32:38.646: INFO: got data: { "image": "nautilus.jpg" } May 7 01:32:38.646: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 7 01:32:38.646: INFO: update-demo-nautilus-xmx77 is verified up and running STEP: scaling up the replication controller May 7 01:32:38.649: INFO: scanned /root for discovery docs: May 7 01:32:38.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6331' May 7 01:32:39.779: INFO: stderr: "" May 7 01:32:39.779: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 7 01:32:39.779: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6331' May 7 01:32:39.891: INFO: stderr: "" May 7 01:32:39.891: INFO: stdout: "update-demo-nautilus-rcb6b update-demo-nautilus-xmx77 " May 7 01:32:39.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rcb6b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6331' May 7 01:32:39.995: INFO: stderr: "" May 7 01:32:39.996: INFO: stdout: "" May 7 01:32:39.996: INFO: update-demo-nautilus-rcb6b is created but not running May 7 01:32:44.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6331' May 7 01:32:45.097: INFO: stderr: "" May 7 01:32:45.097: INFO: stdout: "update-demo-nautilus-rcb6b update-demo-nautilus-xmx77 " May 7 01:32:45.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rcb6b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6331' May 7 01:32:45.206: INFO: stderr: "" May 7 01:32:45.207: INFO: stdout: "true" May 7 01:32:45.207: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rcb6b -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6331' May 7 01:32:45.315: INFO: stderr: "" May 7 01:32:45.315: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 01:32:45.315: INFO: validating pod update-demo-nautilus-rcb6b May 7 01:32:45.319: INFO: got data: { "image": "nautilus.jpg" } May 7 01:32:45.319: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 7 01:32:45.319: INFO: update-demo-nautilus-rcb6b is verified up and running May 7 01:32:45.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xmx77 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6331' May 7 01:32:45.433: INFO: stderr: "" May 7 01:32:45.434: INFO: stdout: "true" May 7 01:32:45.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xmx77 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6331' May 7 01:32:45.534: INFO: stderr: "" May 7 01:32:45.534: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 01:32:45.534: INFO: validating pod update-demo-nautilus-xmx77 May 7 01:32:45.537: INFO: got data: { "image": "nautilus.jpg" } May 7 01:32:45.537: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 7 01:32:45.537: INFO: update-demo-nautilus-xmx77 is verified up and running STEP: using delete to clean up resources May 7 01:32:45.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6331' May 7 01:32:45.637: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 7 01:32:45.638: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 7 01:32:45.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6331' May 7 01:32:45.732: INFO: stderr: "No resources found in kubectl-6331 namespace.\n" May 7 01:32:45.732: INFO: stdout: "" May 7 01:32:45.732: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6331 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 7 01:32:45.825: INFO: stderr: "" May 7 01:32:45.825: INFO: stdout: "update-demo-nautilus-rcb6b\nupdate-demo-nautilus-xmx77\n" May 7 01:32:46.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6331' May 7 01:32:46.622: INFO: stderr: "No resources found in kubectl-6331 namespace.\n" May 7 01:32:46.622: INFO: stdout: "" May 7 01:32:46.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6331 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 7 
01:32:46.718: INFO: stderr: "" May 7 01:32:46.718: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:32:46.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6331" for this suite. • [SLOW TEST:31.097 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":272,"skipped":4498,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:32:46.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 7 01:32:47.065: INFO: Waiting up to 5m0s for pod "pod-ec2e198e-cf4e-49a3-b436-d32d412f2840" in namespace "emptydir-7773" to be "Succeeded or Failed" May 7 01:32:47.082: INFO: Pod 
"pod-ec2e198e-cf4e-49a3-b436-d32d412f2840": Phase="Pending", Reason="", readiness=false. Elapsed: 17.33208ms May 7 01:32:49.086: INFO: Pod "pod-ec2e198e-cf4e-49a3-b436-d32d412f2840": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021377496s May 7 01:32:51.091: INFO: Pod "pod-ec2e198e-cf4e-49a3-b436-d32d412f2840": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026031284s STEP: Saw pod success May 7 01:32:51.091: INFO: Pod "pod-ec2e198e-cf4e-49a3-b436-d32d412f2840" satisfied condition "Succeeded or Failed" May 7 01:32:51.094: INFO: Trying to get logs from node latest-worker2 pod pod-ec2e198e-cf4e-49a3-b436-d32d412f2840 container test-container: STEP: delete the pod May 7 01:32:51.149: INFO: Waiting for pod pod-ec2e198e-cf4e-49a3-b436-d32d412f2840 to disappear May 7 01:32:51.154: INFO: Pod pod-ec2e198e-cf4e-49a3-b436-d32d412f2840 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:32:51.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7773" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":273,"skipped":4505,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:32:51.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 01:32:55.354: INFO: Waiting up to 5m0s for pod "client-envvars-4c24e8e8-7bb6-4a40-aab3-1a4cf36f8306" in namespace "pods-2917" to be "Succeeded or Failed" May 7 01:32:55.394: INFO: Pod "client-envvars-4c24e8e8-7bb6-4a40-aab3-1a4cf36f8306": Phase="Pending", Reason="", readiness=false. Elapsed: 39.816768ms May 7 01:32:57.507: INFO: Pod "client-envvars-4c24e8e8-7bb6-4a40-aab3-1a4cf36f8306": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152771079s May 7 01:32:59.511: INFO: Pod "client-envvars-4c24e8e8-7bb6-4a40-aab3-1a4cf36f8306": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.156789548s STEP: Saw pod success May 7 01:32:59.511: INFO: Pod "client-envvars-4c24e8e8-7bb6-4a40-aab3-1a4cf36f8306" satisfied condition "Succeeded or Failed" May 7 01:32:59.514: INFO: Trying to get logs from node latest-worker2 pod client-envvars-4c24e8e8-7bb6-4a40-aab3-1a4cf36f8306 container env3cont: STEP: delete the pod May 7 01:32:59.576: INFO: Waiting for pod client-envvars-4c24e8e8-7bb6-4a40-aab3-1a4cf36f8306 to disappear May 7 01:32:59.586: INFO: Pod client-envvars-4c24e8e8-7bb6-4a40-aab3-1a4cf36f8306 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:32:59.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2917" for this suite. • [SLOW TEST:8.432 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":274,"skipped":4514,"failed":0} [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:32:59.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when 
deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0507 01:33:00.719139 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 7 01:33:00.719: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:33:00.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3789" for this suite. 
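The garbage-collector test above deletes the Deployment with `deleteOptions.PropagationPolicy: Orphan`, so the ReplicaSet the Deployment created is left behind rather than cascaded away. A hedged sketch of the minimal delete-options body such a request carries (field name per the Kubernetes API; everything else here is illustrative):

```python
import json

# DeleteOptions asking the garbage collector to orphan dependents:
# the owning Deployment is removed, but its ReplicaSet survives with
# its ownerReference cleared instead of being deleted in cascade.
delete_options = {"propagationPolicy": "Orphan"}

print(json.dumps(delete_options))  # {"propagationPolicy": "Orphan"}
```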
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":275,"skipped":4514,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:33:00.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-45km STEP: Creating a pod to test atomic-volume-subpath May 7 01:33:00.960: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-45km" in namespace "subpath-7705" to be "Succeeded or Failed" May 7 01:33:00.982: INFO: Pod "pod-subpath-test-downwardapi-45km": Phase="Pending", Reason="", readiness=false. Elapsed: 22.283926ms May 7 01:33:02.987: INFO: Pod "pod-subpath-test-downwardapi-45km": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027108038s May 7 01:33:05.195: INFO: Pod "pod-subpath-test-downwardapi-45km": Phase="Pending", Reason="", readiness=false. Elapsed: 4.235351155s May 7 01:33:07.272: INFO: Pod "pod-subpath-test-downwardapi-45km": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.312366449s May 7 01:33:09.276: INFO: Pod "pod-subpath-test-downwardapi-45km": Phase="Running", Reason="", readiness=true. Elapsed: 8.316188809s May 7 01:33:11.280: INFO: Pod "pod-subpath-test-downwardapi-45km": Phase="Running", Reason="", readiness=true. Elapsed: 10.320791275s May 7 01:33:13.302: INFO: Pod "pod-subpath-test-downwardapi-45km": Phase="Running", Reason="", readiness=true. Elapsed: 12.342272973s May 7 01:33:15.307: INFO: Pod "pod-subpath-test-downwardapi-45km": Phase="Running", Reason="", readiness=true. Elapsed: 14.346980974s May 7 01:33:17.311: INFO: Pod "pod-subpath-test-downwardapi-45km": Phase="Running", Reason="", readiness=true. Elapsed: 16.35158643s May 7 01:33:19.315: INFO: Pod "pod-subpath-test-downwardapi-45km": Phase="Running", Reason="", readiness=true. Elapsed: 18.355508579s May 7 01:33:21.319: INFO: Pod "pod-subpath-test-downwardapi-45km": Phase="Running", Reason="", readiness=true. Elapsed: 20.359284973s May 7 01:33:23.323: INFO: Pod "pod-subpath-test-downwardapi-45km": Phase="Running", Reason="", readiness=true. Elapsed: 22.362932605s May 7 01:33:25.327: INFO: Pod "pod-subpath-test-downwardapi-45km": Phase="Running", Reason="", readiness=true. Elapsed: 24.367336994s May 7 01:33:27.332: INFO: Pod "pod-subpath-test-downwardapi-45km": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.372258987s STEP: Saw pod success May 7 01:33:27.332: INFO: Pod "pod-subpath-test-downwardapi-45km" satisfied condition "Succeeded or Failed" May 7 01:33:27.335: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-45km container test-container-subpath-downwardapi-45km: STEP: delete the pod May 7 01:33:27.400: INFO: Waiting for pod pod-subpath-test-downwardapi-45km to disappear May 7 01:33:27.413: INFO: Pod pod-subpath-test-downwardapi-45km no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-45km May 7 01:33:27.413: INFO: Deleting pod "pod-subpath-test-downwardapi-45km" in namespace "subpath-7705" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:33:27.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7705" for this suite. • [SLOW TEST:26.692 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":276,"skipped":4539,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating 
a kubernetes client May 7 01:33:27.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:33:41.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4078" for this suite. • [SLOW TEST:14.182 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":277,"skipped":4581,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:33:41.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-789a3a1c-4318-449b-9771-0d295f4f3dd4 STEP: Creating configMap with name cm-test-opt-upd-b4c587d9-49b3-4229-b965-eea0e44dda5f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-789a3a1c-4318-449b-9771-0d295f4f3dd4 STEP: Updating configmap cm-test-opt-upd-b4c587d9-49b3-4229-b965-eea0e44dda5f STEP: Creating configMap with name cm-test-opt-create-addbc299-7efd-423d-88ea-fa23761543b9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:33:51.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-300" for this suite. • [SLOW TEST:10.337 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":278,"skipped":4607,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:33:51.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building 
a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 7 01:33:52.472: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 7 01:33:54.483: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724412032, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724412032, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724412032, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724412032, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 01:33:57.564: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 7 01:33:57.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering 
the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:33:58.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4837" for this suite. STEP: Destroying namespace "webhook-4837-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.098 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":279,"skipped":4619,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 
STEP: Creating a kubernetes client May 7 01:33:59.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 7 01:33:59.724: INFO: created pod pod-service-account-defaultsa May 7 01:33:59.725: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 7 01:33:59.747: INFO: created pod pod-service-account-mountsa May 7 01:33:59.747: INFO: pod pod-service-account-mountsa service account token volume mount: true May 7 01:33:59.756: INFO: created pod pod-service-account-nomountsa May 7 01:33:59.756: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 7 01:33:59.777: INFO: created pod pod-service-account-defaultsa-mountspec May 7 01:33:59.777: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 7 01:33:59.986: INFO: created pod pod-service-account-mountsa-mountspec May 7 01:33:59.986: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 7 01:34:00.071: INFO: created pod pod-service-account-nomountsa-mountspec May 7 01:34:00.071: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 7 01:34:00.139: INFO: created pod pod-service-account-defaultsa-nomountspec May 7 01:34:00.139: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 7 01:34:00.171: INFO: created pod pod-service-account-mountsa-nomountspec May 7 01:34:00.171: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 7 01:34:00.214: INFO: created pod pod-service-account-nomountsa-nomountspec May 7 01:34:00.214: INFO: pod 
pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:34:00.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6649" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":280,"skipped":4621,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 01:34:00.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 7 01:34:15.314: INFO: Successfully updated pod "labelsupdate00a28198-b1ec-42a9-9c92-704c0606bf87" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 01:34:17.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7243" for this suite. 
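The ServiceAccounts automount test a little earlier enumerates every combination of the ServiceAccount's `automountServiceAccountToken` setting and the pod spec's, and the `token volume mount: true/false` lines follow the documented precedence: the pod spec overrides the ServiceAccount, and the default (both unset) is to mount the token. A small sketch of that decision (function name is illustrative):

```python
def token_mounted(sa_automount, pod_automount):
    """Effective token automount: the pod spec's
    automountServiceAccountToken overrides the ServiceAccount's;
    if both are unset (None), the token is mounted by default."""
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True

# Matches the log's matrix, e.g. nomountsa with mountspec=True mounts:
print(token_mounted(False, True))   # True
print(token_mounted(False, None))   # False
```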
• [SLOW TEST:16.973 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":281,"skipped":4635,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 01:34:17.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
May 7 01:34:17.544: INFO: Waiting up to 5m0s for pod "pod-eb8facf0-a7eb-4240-890c-814d187e5339" in namespace "emptydir-297" to be "Succeeded or Failed"
May 7 01:34:17.656: INFO: Pod "pod-eb8facf0-a7eb-4240-890c-814d187e5339": Phase="Pending", Reason="", readiness=false. Elapsed: 112.054977ms
May 7 01:34:19.660: INFO: Pod "pod-eb8facf0-a7eb-4240-890c-814d187e5339": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116175609s
May 7 01:34:21.665: INFO: Pod "pod-eb8facf0-a7eb-4240-890c-814d187e5339": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121086378s
STEP: Saw pod success
May 7 01:34:21.665: INFO: Pod "pod-eb8facf0-a7eb-4240-890c-814d187e5339" satisfied condition "Succeeded or Failed"
May 7 01:34:21.668: INFO: Trying to get logs from node latest-worker pod pod-eb8facf0-a7eb-4240-890c-814d187e5339 container test-container:
STEP: delete the pod
May 7 01:34:21.705: INFO: Waiting for pod pod-eb8facf0-a7eb-4240-890c-814d187e5339 to disappear
May 7 01:34:21.714: INFO: Pod pod-eb8facf0-a7eb-4240-890c-814d187e5339 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 01:34:21.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-297" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":282,"skipped":4642,"failed":0}
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 01:34:21.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-fz9r
STEP: Creating a pod to test atomic-volume-subpath
May 7 01:34:21.781: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fz9r" in namespace "subpath-9847" to be "Succeeded or Failed"
May 7 01:34:21.805: INFO: Pod "pod-subpath-test-projected-fz9r": Phase="Pending", Reason="", readiness=false. Elapsed: 23.537741ms
May 7 01:34:23.809: INFO: Pod "pod-subpath-test-projected-fz9r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027554537s
May 7 01:34:25.814: INFO: Pod "pod-subpath-test-projected-fz9r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032619097s
May 7 01:34:27.820: INFO: Pod "pod-subpath-test-projected-fz9r": Phase="Running", Reason="", readiness=true. Elapsed: 6.038654603s
May 7 01:34:29.824: INFO: Pod "pod-subpath-test-projected-fz9r": Phase="Running", Reason="", readiness=true. Elapsed: 8.042868354s
May 7 01:34:31.829: INFO: Pod "pod-subpath-test-projected-fz9r": Phase="Running", Reason="", readiness=true. Elapsed: 10.047546552s
May 7 01:34:33.834: INFO: Pod "pod-subpath-test-projected-fz9r": Phase="Running", Reason="", readiness=true. Elapsed: 12.052430388s
May 7 01:34:35.839: INFO: Pod "pod-subpath-test-projected-fz9r": Phase="Running", Reason="", readiness=true. Elapsed: 14.057091118s
May 7 01:34:37.842: INFO: Pod "pod-subpath-test-projected-fz9r": Phase="Running", Reason="", readiness=true. Elapsed: 16.06031056s
May 7 01:34:39.845: INFO: Pod "pod-subpath-test-projected-fz9r": Phase="Running", Reason="", readiness=true. Elapsed: 18.063563931s
May 7 01:34:41.849: INFO: Pod "pod-subpath-test-projected-fz9r": Phase="Running", Reason="", readiness=true. Elapsed: 20.067433898s
May 7 01:34:43.852: INFO: Pod "pod-subpath-test-projected-fz9r": Phase="Running", Reason="", readiness=true. Elapsed: 22.070542291s
May 7 01:34:45.887: INFO: Pod "pod-subpath-test-projected-fz9r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.105621821s
STEP: Saw pod success
May 7 01:34:45.887: INFO: Pod "pod-subpath-test-projected-fz9r" satisfied condition "Succeeded or Failed"
May 7 01:34:45.890: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-fz9r container test-container-subpath-projected-fz9r:
STEP: delete the pod
May 7 01:34:45.945: INFO: Waiting for pod pod-subpath-test-projected-fz9r to disappear
May 7 01:34:45.972: INFO: Pod pod-subpath-test-projected-fz9r no longer exists
STEP: Deleting pod pod-subpath-test-projected-fz9r
May 7 01:34:45.972: INFO: Deleting pod "pod-subpath-test-projected-fz9r" in namespace "subpath-9847"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 01:34:45.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9847" for this suite.
• [SLOW TEST:24.262 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":283,"skipped":4647,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 01:34:45.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-57b85a9e-cadc-4a4a-b016-e529955f6820
STEP: Creating a pod to test consume configMaps
May 7 01:34:46.308: INFO: Waiting up to 5m0s for pod "pod-configmaps-3bd027de-5e16-4893-af17-d004155c8016" in namespace "configmap-4558" to be "Succeeded or Failed"
May 7 01:34:46.443: INFO: Pod "pod-configmaps-3bd027de-5e16-4893-af17-d004155c8016": Phase="Pending", Reason="", readiness=false. Elapsed: 135.208696ms
May 7 01:34:48.447: INFO: Pod "pod-configmaps-3bd027de-5e16-4893-af17-d004155c8016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139594424s
May 7 01:34:50.452: INFO: Pod "pod-configmaps-3bd027de-5e16-4893-af17-d004155c8016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.143935014s
STEP: Saw pod success
May 7 01:34:50.452: INFO: Pod "pod-configmaps-3bd027de-5e16-4893-af17-d004155c8016" satisfied condition "Succeeded or Failed"
May 7 01:34:50.454: INFO: Trying to get logs from node latest-worker pod pod-configmaps-3bd027de-5e16-4893-af17-d004155c8016 container configmap-volume-test:
STEP: delete the pod
May 7 01:34:50.498: INFO: Waiting for pod pod-configmaps-3bd027de-5e16-4893-af17-d004155c8016 to disappear
May 7 01:34:50.506: INFO: Pod pod-configmaps-3bd027de-5e16-4893-af17-d004155c8016 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 01:34:50.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4558" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":284,"skipped":4655,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 01:34:50.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-9822
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9822
STEP: Creating statefulset with conflicting port in namespace statefulset-9822
STEP: Waiting until pod test-pod will start running in namespace statefulset-9822
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9822
May 7 01:34:54.741: INFO: Observed stateful pod in namespace: statefulset-9822, name: ss-0, uid: 83195e2d-77ec-419f-9ceb-7943176a6ae7, status phase: Pending. Waiting for statefulset controller to delete.
May 7 01:34:55.306: INFO: Observed stateful pod in namespace: statefulset-9822, name: ss-0, uid: 83195e2d-77ec-419f-9ceb-7943176a6ae7, status phase: Failed. Waiting for statefulset controller to delete.
May 7 01:34:55.327: INFO: Observed stateful pod in namespace: statefulset-9822, name: ss-0, uid: 83195e2d-77ec-419f-9ceb-7943176a6ae7, status phase: Failed. Waiting for statefulset controller to delete.
May 7 01:34:55.376: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9822
STEP: Removing pod with conflicting port in namespace statefulset-9822
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9822 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
May 7 01:35:01.447: INFO: Deleting all statefulset in ns statefulset-9822
May 7 01:35:01.451: INFO: Scaling statefulset ss to 0
May 7 01:35:21.470: INFO: Waiting for statefulset status.replicas updated to 0
May 7 01:35:21.472: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 01:35:21.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9822" for this suite.
• [SLOW TEST:30.984 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":285,"skipped":4667,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 01:35:21.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 01:35:37.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-889" for this suite.
• [SLOW TEST:16.475 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":288,"completed":286,"skipped":4723,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 01:35:38.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 7 01:35:45.852: INFO: 10 pods remaining
May 7 01:35:45.852: INFO: 0 pods has nil DeletionTimestamp
May 7 01:35:45.852: INFO:
May 7 01:35:47.507: INFO: 0 pods remaining
May 7 01:35:47.507: INFO: 0 pods has nil DeletionTimestamp
May 7 01:35:47.507: INFO:
May 7 01:35:48.657: INFO: 0 pods remaining
May 7 01:35:48.657: INFO: 0 pods has nil DeletionTimestamp
May 7 01:35:48.657: INFO:
STEP: Gathering metrics
W0507 01:35:48.995541 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 7 01:35:48.995: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 01:35:48.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3395" for this suite.
• [SLOW TEST:11.632 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":287,"skipped":4738,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 01:35:49.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-83e22c19-a91d-4cdd-a413-ec749be3d5d3
STEP: Creating a pod to test consume secrets
May 7 01:35:50.805: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4816a54c-230c-4f56-9d56-6ac4045ef1b4" in namespace "projected-1821" to be "Succeeded or Failed"
May 7 01:35:50.808: INFO: Pod "pod-projected-secrets-4816a54c-230c-4f56-9d56-6ac4045ef1b4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.249063ms
May 7 01:35:52.838: INFO: Pod "pod-projected-secrets-4816a54c-230c-4f56-9d56-6ac4045ef1b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033550022s
May 7 01:35:54.901: INFO: Pod "pod-projected-secrets-4816a54c-230c-4f56-9d56-6ac4045ef1b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096414466s
STEP: Saw pod success
May 7 01:35:54.901: INFO: Pod "pod-projected-secrets-4816a54c-230c-4f56-9d56-6ac4045ef1b4" satisfied condition "Succeeded or Failed"
May 7 01:35:54.915: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-4816a54c-230c-4f56-9d56-6ac4045ef1b4 container projected-secret-volume-test:
STEP: delete the pod
May 7 01:35:55.036: INFO: Waiting for pod pod-projected-secrets-4816a54c-230c-4f56-9d56-6ac4045ef1b4 to disappear
May 7 01:35:55.066: INFO: Pod pod-projected-secrets-4816a54c-230c-4f56-9d56-6ac4045ef1b4 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 01:35:55.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1821" for this suite.
• [SLOW TEST:5.457 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":288,"skipped":4783,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
May 7 01:35:55.091: INFO: Running AfterSuite actions on all nodes
May 7 01:35:55.091: INFO: Running AfterSuite actions on node 1
May 7 01:35:55.091: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0}

Ran 288 of 5095 Specs in 5879.039 seconds
SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped
PASS