I0318 21:06:19.802492 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0318 21:06:19.802743 6 e2e.go:109] Starting e2e run "89b75577-6e5a-4c8f-87a0-f4404043876d" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584565578 - Will randomize all specs
Will run 278 of 4843 specs

Mar 18 21:06:19.855: INFO: >>> kubeConfig: /root/.kube/config
Mar 18 21:06:19.860: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 18 21:06:19.894: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 18 21:06:19.930: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 18 21:06:19.930: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 18 21:06:19.930: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 18 21:06:19.941: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 18 21:06:19.941: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 18 21:06:19.941: INFO: e2e test version: v1.17.3
Mar 18 21:06:19.943: INFO: kube-apiserver version: v1.17.2
Mar 18 21:06:19.943: INFO: >>> kubeConfig: /root/.kube/config
Mar 18 21:06:19.949: INFO: Cluster IP family: ipv4
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 18 21:06:19.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Mar 18 21:06:20.050: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-e49f4c7d-1bb1-453e-a5c8-7ede4bc9086e
STEP: Creating a pod to test consume configMaps
Mar 18 21:06:20.063: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ea20d616-27b8-4ed8-9050-bd6131465469" in namespace "projected-6763" to be "success or failure"
Mar 18 21:06:20.083: INFO: Pod "pod-projected-configmaps-ea20d616-27b8-4ed8-9050-bd6131465469": Phase="Pending", Reason="", readiness=false. Elapsed: 20.033552ms
Mar 18 21:06:22.087: INFO: Pod "pod-projected-configmaps-ea20d616-27b8-4ed8-9050-bd6131465469": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024052559s
Mar 18 21:06:24.090: INFO: Pod "pod-projected-configmaps-ea20d616-27b8-4ed8-9050-bd6131465469": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027754253s
STEP: Saw pod success
Mar 18 21:06:24.090: INFO: Pod "pod-projected-configmaps-ea20d616-27b8-4ed8-9050-bd6131465469" satisfied condition "success or failure"
Mar 18 21:06:24.093: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-ea20d616-27b8-4ed8-9050-bd6131465469 container projected-configmap-volume-test:
STEP: delete the pod
Mar 18 21:06:24.131: INFO: Waiting for pod pod-projected-configmaps-ea20d616-27b8-4ed8-9050-bd6131465469 to disappear
Mar 18 21:06:24.145: INFO: Pod pod-projected-configmaps-ea20d616-27b8-4ed8-9050-bd6131465469 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 18 21:06:24.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6763" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":0,"failed":0}
SSSSSSSSSSSSSS
------------------------------
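For reference, the scenario this spec exercises (one configMap projected into two volumes of the same pod) can be reproduced by hand along the following lines. This is a minimal sketch, not the framework's generated manifest; the names demo-cm and projected-demo, the key data-1, and the mount paths are illustrative:

    # Create a configMap, then a pod that projects it into two volumes at once.
    kubectl create configmap demo-cm --from-literal=data-1=value-1
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        command: ["cat", "/etc/projected-volume-1/data-1", "/etc/projected-volume-2/data-1"]
        volumeMounts:
        - { name: vol-1, mountPath: /etc/projected-volume-1 }
        - { name: vol-2, mountPath: /etc/projected-volume-2 }
      volumes:
      - name: vol-1
        projected:
          sources:
          - configMap: { name: demo-cm }
      - name: vol-2
        projected:
          sources:
          - configMap: { name: demo-cm }
    EOF
    # Once the pod reaches Succeeded, its log should print value-1 twice,
    # which is the substance of the "consume configMaps" assertion above.
    kubectl logs projected-demo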
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 18 21:06:24.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 18 21:06:24.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9576
I0318 21:06:24.211867 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9576, replica count: 1
I0318 21:06:25.262383 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0318 21:06:26.262597 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0318 21:06:27.262852 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 18 21:06:27.398: INFO: Created: latency-svc-c7d45
Mar 18 21:06:27.435: INFO: Got endpoints: latency-svc-c7d45 [72.160736ms]
Mar 18 21:06:27.494: INFO: Created: latency-svc-r4qbt
Mar 18 21:06:27.529: INFO: Got endpoints: latency-svc-r4qbt [94.285256ms]
Mar 18 21:06:27.543: INFO: Created: latency-svc-4j55v
Mar 18 21:06:27.558: INFO: Got endpoints: latency-svc-4j55v [123.373124ms]
Mar 18 21:06:27.578: INFO: Created: latency-svc-n6rjz
Mar 18 21:06:27.589: INFO: Got endpoints: latency-svc-n6rjz [153.977859ms]
Mar 18 21:06:27.608: INFO: Created: latency-svc-nxbjt
Mar 18 21:06:27.625: INFO: Got endpoints: latency-svc-nxbjt [189.864374ms]
Mar 18 21:06:27.667: INFO: Created: latency-svc-sj784
Mar 18 21:06:27.692: INFO: Created: latency-svc-pltwp
Mar 18 21:06:27.692: INFO: Got endpoints: latency-svc-sj784 [256.665442ms]
Mar 18 21:06:27.715: INFO: Got endpoints: latency-svc-pltwp [279.948293ms]
Mar 18 21:06:27.746: INFO: Created: latency-svc-f6klt
Mar 18 21:06:27.758: INFO: Got endpoints: latency-svc-f6klt [322.376514ms]
Mar 18 21:06:27.793: INFO: Created: latency-svc-cqlf7
Mar 18 21:06:27.796: INFO: Got endpoints: latency-svc-cqlf7 [360.717937ms]
Mar 18 21:06:27.818: INFO: Created: latency-svc-gqnx7
Mar 18 21:06:27.842: INFO: Got endpoints: latency-svc-gqnx7 [406.620293ms]
Mar 18 21:06:27.865: INFO: Created: latency-svc-xrp78
Mar 18 21:06:27.918: INFO: Got endpoints: latency-svc-xrp78 [483.151618ms]
Mar 18 21:06:27.925: INFO: Created: latency-svc-m7kjb
Mar 18 21:06:27.938: INFO: Got endpoints: latency-svc-m7kjb [503.451588ms]
Mar 18 21:06:27.956: INFO: Created: latency-svc-9xz4r
Mar 18 21:06:27.969: INFO: Got endpoints: latency-svc-9xz4r [533.470111ms]
Mar 18 21:06:27.986: INFO: Created: latency-svc-rdkxv
Mar 18 21:06:27.998: INFO: Got endpoints: latency-svc-rdkxv [563.138271ms]
Mar 18 21:06:28.015: INFO: Created: latency-svc-rqnwn
Mar 18 21:06:28.051: INFO: Got endpoints: latency-svc-rqnwn [615.594346ms]
Mar 18 21:06:28.063: INFO: Created: latency-svc-kngd4
Mar 18 21:06:28.077: INFO: Got endpoints: latency-svc-kngd4 [641.672404ms]
Mar 18 21:06:28.094: INFO: Created: latency-svc-v66fl
Mar 18 21:06:28.117: INFO: Got endpoints: latency-svc-v66fl [588.157883ms]
Mar 18 21:06:28.142: INFO: Created: latency-svc-wm85n
Mar 18 21:06:28.188: INFO: Got endpoints: latency-svc-wm85n [630.108076ms]
Mar 18 21:06:28.195: INFO: Created: latency-svc-z7wl7
Mar 18 21:06:28.209: INFO: Got endpoints: latency-svc-z7wl7 [620.2883ms]
Mar 18 21:06:28.232: INFO: Created: latency-svc-7nhl2
Mar 18 21:06:28.246: INFO: Got endpoints: latency-svc-7nhl2 [620.589673ms]
Mar 18 21:06:28.267: INFO: Created: latency-svc-xq92t
Mar 18 21:06:28.282: INFO: Got endpoints: latency-svc-xq92t [589.49407ms]
Mar 18 21:06:28.326: INFO: Created: latency-svc-xf96x
Mar 18 21:06:28.329: INFO: Got endpoints: latency-svc-xf96x [613.79453ms]
Mar 18 21:06:28.358: INFO: Created: latency-svc-6z4wg
Mar 18 21:06:28.366: INFO: Got endpoints: latency-svc-6z4wg [608.605487ms]
Mar 18 21:06:28.387: INFO: Created: latency-svc-cj9q8
Mar 18 21:06:28.403: INFO: Got endpoints: latency-svc-cj9q8 [607.021285ms]
Mar 18 21:06:28.424: INFO: Created: latency-svc-sjcfb
Mar 18 21:06:28.457: INFO: Got endpoints: latency-svc-sjcfb [615.698241ms]
Mar 18 21:06:28.471: INFO: Created: latency-svc-j6c7q
Mar 18 21:06:28.487: INFO: Got endpoints: latency-svc-j6c7q [568.651957ms]
Mar 18 21:06:28.520: INFO: Created: latency-svc-dz5sc
Mar 18 21:06:28.543: INFO: Got endpoints: latency-svc-dz5sc [604.943277ms]
Mar 18 21:06:28.602: INFO: Created: latency-svc-rvlsc
Mar 18 21:06:28.633: INFO: Got endpoints: latency-svc-rvlsc [664.524471ms]
Mar 18 21:06:28.682: INFO: Created: latency-svc-kw8bz
Mar 18 21:06:28.698: INFO: Got endpoints: latency-svc-kw8bz [699.611572ms]
Mar 18 21:06:28.746: INFO: Created: latency-svc-hxpmc
Mar 18 21:06:28.752: INFO: Got endpoints: latency-svc-hxpmc [701.025788ms]
Mar 18 21:06:28.783: INFO: Created: latency-svc-bpwb7
Mar 18 21:06:28.819: INFO: Got endpoints: latency-svc-bpwb7 [741.805299ms]
Mar 18 21:06:28.838: INFO: Created: latency-svc-bqfmt
Mar 18 21:06:28.895: INFO: Got endpoints: latency-svc-bqfmt [777.044897ms]
Mar 18 21:06:28.915: INFO: Created: latency-svc-wxtqk
Mar 18 21:06:28.933: INFO: Got endpoints: latency-svc-wxtqk [744.165489ms]
Mar 18 21:06:28.951: INFO: Created: latency-svc-p8xcr
Mar 18 21:06:28.972: INFO: Got endpoints: latency-svc-p8xcr [762.813516ms]
Mar 18 21:06:28.988: INFO: Created: latency-svc-lgfv7
Mar 18 21:06:29.056: INFO: Got endpoints: latency-svc-lgfv7 [810.312443ms]
Mar 18 21:06:29.060: INFO: Created: latency-svc-mt66b
Mar 18 21:06:29.066: INFO: Got endpoints: latency-svc-mt66b [784.264639ms]
Mar 18 21:06:29.083: INFO: Created: latency-svc-4cc4f
Mar 18 21:06:29.098: INFO: Got endpoints: latency-svc-4cc4f [769.08189ms]
Mar 18 21:06:29.113: INFO: Created: latency-svc-tp8q5
Mar 18 21:06:29.127: INFO: Got endpoints: latency-svc-tp8q5 [760.31661ms]
Mar 18 21:06:29.143: INFO: Created: latency-svc-t577j
Mar 18 21:06:29.242: INFO: Got endpoints: latency-svc-t577j [838.936871ms]
Mar 18 21:06:29.249: INFO: Created: latency-svc-22rq9
Mar 18 21:06:29.252: INFO: Got endpoints: latency-svc-22rq9 [794.747474ms]
Mar 18 21:06:29.311: INFO: Created: latency-svc-6tfnr
Mar 18 21:06:29.331: INFO: Got endpoints: latency-svc-6tfnr [844.202282ms]
Mar 18 21:06:29.401: INFO: Created: latency-svc-vjpfr
Mar 18 21:06:29.433: INFO: Got endpoints: latency-svc-vjpfr [890.014129ms]
Mar 18 21:06:29.473: INFO: Created: latency-svc-788qz
Mar 18 21:06:29.547: INFO: Got endpoints: latency-svc-788qz [913.859417ms]
Mar 18 21:06:29.581: INFO: Created: latency-svc-qq6mn
Mar 18 21:06:29.598: INFO: Got endpoints: latency-svc-qq6mn [900.055627ms]
Mar 18 21:06:29.629: INFO: Created: latency-svc-v5q4d
Mar 18 21:06:29.685: INFO: Got endpoints: latency-svc-v5q4d [932.925346ms]
Mar 18 21:06:29.719: INFO: Created: latency-svc-ttgf7
Mar 18 21:06:29.734: INFO: Got endpoints: latency-svc-ttgf7 [915.155739ms]
Mar 18 21:06:29.755: INFO: Created: latency-svc-lz662
Mar 18 21:06:29.764: INFO: Got endpoints: latency-svc-lz662 [869.333393ms]
Mar 18 21:06:29.829: INFO: Created: latency-svc-gjdmm
Mar 18 21:06:29.835: INFO: Got endpoints: latency-svc-gjdmm [901.931027ms]
Mar 18 21:06:29.863: INFO: Created: latency-svc-p5njx
Mar 18 21:06:29.879: INFO: Got endpoints: latency-svc-p5njx [906.277209ms]
Mar 18 21:06:29.899: INFO: Created: latency-svc-m5tph
Mar 18 21:06:29.915: INFO: Got endpoints: latency-svc-m5tph [859.111149ms]
Mar 18 21:06:29.984: INFO: Created: latency-svc-m8v95
Mar 18 21:06:29.988: INFO: Got endpoints: latency-svc-m8v95 [921.441992ms]
Mar 18 21:06:30.013: INFO: Created: latency-svc-7hzx9
Mar 18 21:06:30.029: INFO: Got endpoints: latency-svc-7hzx9 [931.022712ms]
Mar 18 21:06:30.050: INFO: Created: latency-svc-ws9rz
Mar 18 21:06:30.068: INFO: Got endpoints: latency-svc-ws9rz [941.773578ms]
Mar 18 21:06:30.128: INFO: Created: latency-svc-dxdjm
Mar 18 21:06:30.163: INFO: Got endpoints: latency-svc-dxdjm [920.655701ms]
Mar 18 21:06:30.193: INFO: Created: latency-svc-vgnm9
Mar 18 21:06:30.210: INFO: Got endpoints: latency-svc-vgnm9 [957.575098ms]
Mar 18 21:06:30.271: INFO: Created: latency-svc-sth5l
Mar 18 21:06:30.288: INFO: Got endpoints: latency-svc-sth5l [956.855186ms]
Mar 18 21:06:30.313: INFO: Created: latency-svc-cxgb7
Mar 18 21:06:30.324: INFO: Got endpoints: latency-svc-cxgb7 [890.869315ms]
Mar 18 21:06:30.349: INFO: Created: latency-svc-rmpkl
Mar 18 21:06:30.360: INFO: Got endpoints: latency-svc-rmpkl [813.086691ms]
Mar 18 21:06:30.404: INFO: Created: latency-svc-s2hmn
Mar 18 21:06:30.409: INFO: Got endpoints: latency-svc-s2hmn [811.242714ms]
Mar 18 21:06:30.427: INFO: Created: latency-svc-bsp5z
Mar 18 21:06:30.439: INFO: Got endpoints: latency-svc-bsp5z [753.862235ms]
Mar 18 21:06:30.456: INFO: Created: latency-svc-mmz64
Mar 18 21:06:30.470: INFO: Got endpoints: latency-svc-mmz64 [735.732213ms]
Mar 18 21:06:30.493: INFO: Created: latency-svc-gn5gn
Mar 18 21:06:30.559: INFO: Got endpoints: latency-svc-gn5gn [795.326135ms]
Mar 18 21:06:30.561: INFO: Created: latency-svc-bfc7z
Mar 18 21:06:30.573: INFO: Got endpoints: latency-svc-bfc7z [738.212524ms]
Mar 18 21:06:30.601: INFO: Created: latency-svc-wddpc
Mar 18 21:06:30.615: INFO: Got endpoints: latency-svc-wddpc [736.379192ms]
Mar 18 21:06:30.691: INFO: Created: latency-svc-7g7pt
Mar 18 21:06:30.699: INFO: Got endpoints: latency-svc-7g7pt [783.940152ms]
Mar 18 21:06:30.721: INFO: Created: latency-svc-mb64p
Mar 18 21:06:30.736: INFO: Got endpoints: latency-svc-mb64p [748.351902ms]
Mar 18 21:06:30.757: INFO: Created: latency-svc-6nqvx
Mar 18 21:06:30.772: INFO: Got endpoints: latency-svc-6nqvx [742.769784ms]
Mar 18 21:06:30.829: INFO: Created: latency-svc-dxvgs
Mar 18 21:06:30.832: INFO: Got endpoints: latency-svc-dxvgs [763.65222ms]
Mar 18 21:06:30.871: INFO: Created: latency-svc-9ftsx
Mar 18 21:06:30.900: INFO: Got endpoints: latency-svc-9ftsx [737.633992ms]
Mar 18 21:06:30.992: INFO: Created: latency-svc-vbf6x
Mar 18 21:06:30.995: INFO: Got endpoints: latency-svc-vbf6x [785.234518ms]
Mar 18 21:06:31.027: INFO: Created: latency-svc-4hnz2
Mar 18 21:06:31.044: INFO: Got endpoints: latency-svc-4hnz2 [755.950761ms]
Mar 18 21:06:31.062: INFO: Created: latency-svc-rmslb
Mar 18 21:06:31.079: INFO: Got endpoints: latency-svc-rmslb [754.975988ms]
Mar 18 21:06:31.122: INFO: Created: latency-svc-q7d8n
Mar 18 21:06:31.126: INFO: Got endpoints: latency-svc-q7d8n [765.471306ms]
Mar 18 21:06:31.153: INFO: Created: latency-svc-jnp7b
Mar 18 21:06:31.176: INFO: Got endpoints: latency-svc-jnp7b [766.786337ms]
Mar 18 21:06:31.207: INFO: Created: latency-svc-ljz7d
Mar 18 21:06:31.278: INFO: Got endpoints: latency-svc-ljz7d [151.627397ms]
Mar 18 21:06:31.282: INFO: Created: latency-svc-vpqd4
Mar 18 21:06:31.290: INFO: Got endpoints: latency-svc-vpqd4 [851.407032ms]
Mar 18 21:06:31.351: INFO: Created: latency-svc-qmp4b
Mar 18 21:06:31.363: INFO: Got endpoints: latency-svc-qmp4b [892.998453ms]
Mar 18 21:06:31.445: INFO: Created: latency-svc-jpbs8
Mar 18 21:06:31.448: INFO: Got endpoints: latency-svc-jpbs8 [888.785837ms]
Mar 18 21:06:31.488: INFO: Created: latency-svc-2vg7j
Mar 18 21:06:31.501: INFO: Got endpoints: latency-svc-2vg7j [927.65114ms]
Mar 18 21:06:31.531: INFO: Created: latency-svc-6dd49
Mar 18 21:06:31.543: INFO: Got endpoints: latency-svc-6dd49 [927.992675ms]
Mar 18 21:06:31.595: INFO: Created: latency-svc-tzszc
Mar 18 21:06:31.620: INFO: Got endpoints: latency-svc-tzszc [921.182041ms]
Mar 18 21:06:31.621: INFO: Created: latency-svc-gxr7k
Mar 18 21:06:31.657: INFO: Got endpoints: latency-svc-gxr7k [920.875549ms]
Mar 18 21:06:31.686: INFO: Created: latency-svc-6c7dk
Mar 18 21:06:31.720: INFO: Got endpoints: latency-svc-6c7dk [948.028689ms]
Mar 18 21:06:31.765: INFO: Created: latency-svc-96qbf
Mar 18 21:06:31.778: INFO: Got endpoints: latency-svc-96qbf [946.35208ms]
Mar 18 21:06:31.859: INFO: Created: latency-svc-gr7bj
Mar 18 21:06:31.861: INFO: Got endpoints: latency-svc-gr7bj [960.674802ms]
Mar 18 21:06:31.890: INFO: Created: latency-svc-wdflb
Mar 18 21:06:31.906: INFO: Got endpoints: latency-svc-wdflb [911.256014ms]
Mar 18 21:06:31.927: INFO: Created: latency-svc-ws87r
Mar 18 21:06:31.941: INFO: Got endpoints: latency-svc-ws87r [896.938311ms]
Mar 18 21:06:31.997: INFO: Created: latency-svc-x5g7k
Mar 18 21:06:32.001: INFO: Got endpoints: latency-svc-x5g7k [921.950291ms]
Mar 18 21:06:32.028: INFO: Created: latency-svc-4fh8b
Mar 18 21:06:32.044: INFO: Got endpoints: latency-svc-4fh8b [867.730505ms]
Mar 18 21:06:32.071: INFO: Created: latency-svc-q4nnv
Mar 18 21:06:32.080: INFO: Got endpoints: latency-svc-q4nnv [802.454048ms]
Mar 18 21:06:32.134: INFO: Created: latency-svc-hd8xb
Mar 18 21:06:32.139: INFO: Got endpoints: latency-svc-hd8xb [848.63479ms]
Mar 18 21:06:32.166: INFO: Created: latency-svc-rdqjq
Mar 18 21:06:32.183: INFO: Got endpoints: latency-svc-rdqjq [819.859727ms]
Mar 18 21:06:32.210: INFO: Created: latency-svc-vq6bk
Mar 18 21:06:32.225: INFO: Got endpoints: latency-svc-vq6bk [776.716726ms]
Mar 18 21:06:32.272: INFO: Created: latency-svc-k6cp9
Mar 18 21:06:32.279: INFO: Got endpoints: latency-svc-k6cp9 [778.337943ms]
Mar 18 21:06:32.304: INFO: Created: latency-svc-v8qbp
Mar 18 21:06:32.316: INFO: Got endpoints: latency-svc-v8qbp [772.266267ms]
Mar 18 21:06:32.340: INFO: Created: latency-svc-jbf8s
Mar 18 21:06:32.358: INFO: Got endpoints: latency-svc-jbf8s [737.224384ms]
Mar 18 21:06:32.422: INFO: Created: latency-svc-ksvc5
Mar 18 21:06:32.426: INFO: Got endpoints: latency-svc-ksvc5 [768.817067ms]
Mar 18 21:06:32.455: INFO: Created: latency-svc-frv26
Mar 18 21:06:32.484: INFO: Got endpoints: latency-svc-frv26 [764.233111ms]
Mar 18 21:06:32.571: INFO: Created: latency-svc-nm5lc
Mar 18 21:06:32.576: INFO: Got endpoints: latency-svc-nm5lc [797.367237ms]
Mar 18 21:06:32.610: INFO: Created: latency-svc-mrdqm
Mar 18 21:06:32.623: INFO: Got endpoints: latency-svc-mrdqm [761.93808ms]
Mar 18 21:06:32.640: INFO: Created: latency-svc-8fcqj
Mar 18 21:06:32.653: INFO: Got endpoints: latency-svc-8fcqj [747.012007ms]
Mar 18 21:06:32.715: INFO: Created: latency-svc-zjhjn
Mar 18 21:06:32.720: INFO: Got endpoints: latency-svc-zjhjn [778.950284ms]
Mar 18 21:06:32.742: INFO: Created: latency-svc-tp5ss
Mar 18 21:06:32.756: INFO: Got endpoints: latency-svc-tp5ss [754.226026ms]
Mar 18 21:06:32.778: INFO: Created: latency-svc-pfzj2
Mar 18 21:06:32.799: INFO: Got endpoints: latency-svc-pfzj2 [754.463705ms]
Mar 18 21:06:32.871: INFO: Created: latency-svc-kf7ds
Mar 18 21:06:32.875: INFO: Got endpoints: latency-svc-kf7ds [794.927647ms]
Mar 18 21:06:32.898: INFO: Created: latency-svc-lmx55
Mar 18 21:06:32.913: INFO: Got endpoints: latency-svc-lmx55 [773.63825ms]
Mar 18 21:06:32.935: INFO: Created: latency-svc-892gw
Mar 18 21:06:32.949: INFO: Got endpoints: latency-svc-892gw [766.050064ms]
Mar 18 21:06:32.971: INFO: Created: latency-svc-7s46k
Mar 18 21:06:33.032: INFO: Got endpoints: latency-svc-7s46k [807.228699ms]
Mar 18 21:06:33.054: INFO: Created: latency-svc-xcqqv
Mar 18 21:06:33.065: INFO: Got endpoints: latency-svc-xcqqv [785.332133ms]
Mar 18 21:06:33.084: INFO: Created: latency-svc-fxgf9
Mar 18 21:06:33.094: INFO: Got endpoints: latency-svc-fxgf9 [778.496951ms]
Mar 18 21:06:33.117: INFO: Created: latency-svc-7k24p
Mar 18 21:06:33.176: INFO: Got endpoints: latency-svc-7k24p [817.991654ms]
Mar 18 21:06:33.198: INFO: Created: latency-svc-ljh7t
Mar 18 21:06:33.214: INFO: Got endpoints: latency-svc-ljh7t [788.55875ms]
Mar 18 21:06:33.233: INFO: Created: latency-svc-9p7z8
Mar 18 21:06:33.251: INFO: Got endpoints: latency-svc-9p7z8 [766.297145ms]
Mar 18 21:06:33.276: INFO: Created: latency-svc-675bx
Mar 18 21:06:33.338: INFO: Got endpoints: latency-svc-675bx [761.740401ms]
Mar 18 21:06:33.339: INFO: Created: latency-svc-2z8ck
Mar 18 21:06:33.357: INFO: Got endpoints: latency-svc-2z8ck [733.669137ms]
Mar 18 21:06:33.390: INFO: Created: latency-svc-lhbts
Mar 18 21:06:33.405: INFO: Got endpoints: latency-svc-lhbts [751.35602ms]
Mar 18 21:06:33.524: INFO: Created: latency-svc-hg7ln
Mar 18 21:06:33.527: INFO: Got endpoints: latency-svc-hg7ln [807.053409ms]
Mar 18 21:06:33.571: INFO: Created: latency-svc-7q9b4
Mar 18 21:06:33.594: INFO: Got endpoints: latency-svc-7q9b4 [838.238375ms]
Mar 18 21:06:33.618: INFO: Created: latency-svc-vjn8t
Mar 18 21:06:33.714: INFO: Got endpoints: latency-svc-vjn8t [915.783344ms]
Mar 18 21:06:33.716: INFO: Created: latency-svc-q6hbz
Mar 18 21:06:33.723: INFO: Got endpoints: latency-svc-q6hbz [847.998903ms]
Mar 18 21:06:33.744: INFO: Created: latency-svc-9wt77
Mar 18 21:06:33.760: INFO: Got endpoints: latency-svc-9wt77 [847.348225ms]
Mar 18 21:06:33.835: INFO: Created: latency-svc-97c68
Mar 18 21:06:33.839: INFO: Got endpoints: latency-svc-97c68 [889.683672ms]
Mar 18 21:06:33.864: INFO: Created: latency-svc-f7879
Mar 18 21:06:33.880: INFO: Got endpoints: latency-svc-f7879 [847.679774ms]
Mar 18 21:06:33.900: INFO: Created: latency-svc-5mhgt
Mar 18 21:06:33.918: INFO: Got endpoints: latency-svc-5mhgt [853.26707ms]
Mar 18 21:06:33.984: INFO: Created: latency-svc-dtc89
Mar 18 21:06:33.987: INFO: Got endpoints: latency-svc-dtc89 [893.095822ms]
Mar 18 21:06:34.020: INFO: Created: latency-svc-8rbxz
Mar 18 21:06:34.037: INFO: Got endpoints: latency-svc-8rbxz [861.313147ms]
Mar 18 21:06:34.056: INFO: Created: latency-svc-vlgk8
Mar 18 21:06:34.073: INFO: Got endpoints: latency-svc-vlgk8 [858.766307ms]
Mar 18 21:06:34.128: INFO: Created: latency-svc-sjlqh
Mar 18 21:06:34.133: INFO: Got endpoints: latency-svc-sjlqh [882.619211ms]
Mar 18 21:06:34.158: INFO: Created: latency-svc-qt6vm
Mar 18 21:06:34.170: INFO: Got endpoints: latency-svc-qt6vm [832.218506ms]
Mar 18 21:06:34.194: INFO: Created: latency-svc-2x7qg
Mar 18 21:06:34.206: INFO: Got endpoints: latency-svc-2x7qg [849.26071ms]
Mar 18 21:06:34.290: INFO: Created: latency-svc-gr7rl
Mar 18 21:06:34.293: INFO: Got endpoints: latency-svc-gr7rl [887.593484ms]
Mar 18 21:06:34.320: INFO: Created: latency-svc-nt574
Mar 18 21:06:34.333: INFO: Got endpoints: latency-svc-nt574 [805.018244ms]
Mar 18 21:06:34.349: INFO: Created: latency-svc-g2vz7
Mar 18 21:06:34.363: INFO: Got endpoints: latency-svc-g2vz7 [769.030828ms]
Mar 18 21:06:34.380: INFO: Created: latency-svc-56qx7
Mar 18 21:06:34.422: INFO: Got endpoints: latency-svc-56qx7 [707.045324ms]
Mar 18 21:06:34.428: INFO: Created: latency-svc-9mpdr
Mar 18 21:06:34.442: INFO: Got endpoints: latency-svc-9mpdr [718.741929ms]
Mar 18 21:06:34.464: INFO: Created: latency-svc-ttt5d
Mar 18 21:06:34.478: INFO: Got endpoints: latency-svc-ttt5d [717.678371ms]
Mar 18 21:06:34.499: INFO: Created: latency-svc-9tbx2
Mar 18 21:06:34.514: INFO: Got endpoints: latency-svc-9tbx2 [675.19502ms]
Mar 18 21:06:34.571: INFO: Created: latency-svc-fdxhv
Mar 18 21:06:34.595: INFO: Got endpoints: latency-svc-fdxhv [715.425706ms]
Mar 18 21:06:34.625: INFO: Created: latency-svc-sg8mx
Mar 18 21:06:34.634: INFO: Got endpoints: latency-svc-sg8mx [716.563227ms]
Mar 18 21:06:34.656: INFO: Created: latency-svc-qplhc
Mar 18 21:06:34.665: INFO: Got endpoints: latency-svc-qplhc [677.456617ms]
Mar 18 21:06:34.714: INFO: Created: latency-svc-kbpw2
Mar 18 21:06:34.731: INFO: Got endpoints: latency-svc-kbpw2 [693.414429ms]
Mar 18 21:06:34.763: INFO: Created: latency-svc-dgmcs
Mar 18 21:06:34.779: INFO: Got endpoints: latency-svc-dgmcs [705.948729ms]
Mar 18 21:06:34.811: INFO: Created: latency-svc-rpmtk
Mar 18 21:06:34.852: INFO: Got endpoints: latency-svc-rpmtk [718.762641ms]
Mar 18 21:06:34.871: INFO: Created: latency-svc-vwl6n
Mar 18 21:06:34.888: INFO: Got endpoints: latency-svc-vwl6n [718.11442ms]
Mar 18 21:06:34.914: INFO: Created: latency-svc-sn6xt
Mar 18 21:06:34.931: INFO: Got endpoints: latency-svc-sn6xt [725.031382ms]
Mar 18 21:06:34.997: INFO: Created: latency-svc-t28bv
Mar 18 21:06:35.002: INFO: Got endpoints: latency-svc-t28bv [709.682493ms]
Mar 18 21:06:35.029: INFO: Created: latency-svc-kk8lq
Mar 18 21:06:35.045: INFO: Got endpoints: latency-svc-kk8lq [712.601234ms]
Mar 18 21:06:35.075: INFO: Created: latency-svc-hlgv8
Mar 18 21:06:35.147: INFO: Got endpoints: latency-svc-hlgv8 [783.929134ms]
Mar 18 21:06:35.220: INFO: Created: latency-svc-xs84d
Mar 18 21:06:35.266: INFO: Got endpoints: latency-svc-xs84d [843.923937ms]
Mar 18 21:06:35.310: INFO: Created: latency-svc-7q22j
Mar 18 21:06:35.333: INFO: Got endpoints: latency-svc-7q22j [891.291132ms]
Mar 18 21:06:35.364: INFO: Created: latency-svc-p544s
Mar 18 21:06:35.409: INFO: Got endpoints: latency-svc-p544s [931.537891ms]
Mar 18 21:06:35.430: INFO: Created: latency-svc-x58v4
Mar 18 21:06:35.447: INFO: Got endpoints: latency-svc-x58v4 [933.332409ms]
Mar 18 21:06:35.483: INFO: Created: latency-svc-mvprp
Mar 18 21:06:35.502: INFO: Got endpoints: latency-svc-mvprp [906.258946ms]
Mar 18 21:06:35.578: INFO: Created: latency-svc-cb7dg
Mar 18 21:06:35.581: INFO: Got endpoints: latency-svc-cb7dg [946.356995ms]
Mar 18 21:06:35.634: INFO: Created: latency-svc-2qzzv
Mar 18 21:06:35.646: INFO: Got endpoints: latency-svc-2qzzv [980.801038ms]
Mar 18 21:06:35.670: INFO: Created: latency-svc-g5f8g
Mar 18 21:06:35.763: INFO: Got endpoints: latency-svc-g5f8g [1.032537794s]
Mar 18 21:06:35.784: INFO: Created: latency-svc-v98fm
Mar 18 21:06:35.796: INFO: Got endpoints: latency-svc-v98fm [1.016785239s]
Mar 18 21:06:35.820: INFO: Created: latency-svc-cjfbg
Mar 18 21:06:35.844: INFO: Got endpoints: latency-svc-cjfbg [991.303152ms]
Mar 18 21:06:35.901: INFO: Created: latency-svc-nm7wb
Mar 18 21:06:35.922: INFO: Got endpoints: latency-svc-nm7wb [1.03343647s]
Mar 18 21:06:35.922: INFO: Created: latency-svc-bslrj
Mar 18 21:06:35.936: INFO: Got endpoints: latency-svc-bslrj [1.004287413s]
Mar 18 21:06:35.964: INFO: Created: latency-svc-ktq42
Mar 18 21:06:35.988: INFO: Got endpoints: latency-svc-ktq42 [985.327769ms]
Mar 18 21:06:36.045: INFO: Created: latency-svc-m2w9k
Mar 18 21:06:36.050: INFO: Got endpoints: latency-svc-m2w9k [1.005038725s]
Mar 18 21:06:36.072: INFO: Created: latency-svc-6sg7v
Mar 18 21:06:36.086: INFO: Got endpoints: latency-svc-6sg7v [939.293921ms]
Mar 18 21:06:36.108: INFO: Created: latency-svc-cb89p
Mar 18 21:06:36.132: INFO: Got endpoints: latency-svc-cb89p [865.941431ms]
Mar 18 21:06:36.182: INFO: Created: latency-svc-l2lx4
Mar 18 21:06:36.188: INFO: Got endpoints: latency-svc-l2lx4 [855.052105ms]
Mar 18 21:06:36.210: INFO: Created: latency-svc-vtkbl
Mar 18 21:06:36.226: INFO: Got endpoints: latency-svc-vtkbl [816.633965ms]
Mar 18 21:06:36.246: INFO: Created: latency-svc-7xzqb
Mar 18 21:06:36.276: INFO: Got endpoints: latency-svc-7xzqb [828.296836ms]
Mar 18 21:06:36.332: INFO: Created: latency-svc-ttjdd
Mar 18 21:06:36.360: INFO: Got endpoints: latency-svc-ttjdd [857.874884ms]
Mar 18 21:06:36.360: INFO: Created: latency-svc-cxhkv
Mar 18 21:06:36.373: INFO: Got endpoints: latency-svc-cxhkv [792.233004ms]
Mar 18 21:06:36.395: INFO: Created: latency-svc-d8nns
Mar 18 21:06:36.410: INFO: Got endpoints: latency-svc-d8nns [764.33592ms]
Mar 18 21:06:36.431: INFO: Created: latency-svc-8prbq
Mar 18 21:06:36.495: INFO: Created: latency-svc-phs6f
Mar 18 21:06:36.500: INFO: Got endpoints: latency-svc-8prbq [736.850933ms]
Mar 18 21:06:36.500: INFO: Got endpoints: latency-svc-phs6f [704.137253ms]
Mar 18 21:06:36.525: INFO: Created: latency-svc-md5js
Mar 18 21:06:36.530: INFO: Got endpoints: latency-svc-md5js [686.566413ms]
Mar 18 21:06:36.558: INFO: Created: latency-svc-jgw52
Mar 18 21:06:36.573: INFO: Got endpoints: latency-svc-jgw52 [651.562275ms]
Mar 18 21:06:36.637: INFO: Created: latency-svc-cvbnk
Mar 18 21:06:36.660: INFO: Got endpoints: latency-svc-cvbnk [723.976205ms]
Mar 18 21:06:36.660: INFO: Created: latency-svc-775zk
Mar 18 21:06:36.676: INFO: Got endpoints: latency-svc-775zk [687.870703ms]
Mar 18 21:06:36.695: INFO: Created: latency-svc-bg96l
Mar 18 21:06:36.712: INFO: Got endpoints: latency-svc-bg96l [661.264472ms]
Mar 18 21:06:36.782: INFO: Created: latency-svc-7dxd7
Mar 18 21:06:36.791: INFO: Got endpoints: latency-svc-7dxd7 [704.787989ms]
Mar 18 21:06:36.834: INFO: Created: latency-svc-68dqs
Mar 18 21:06:36.850: INFO: Got endpoints: latency-svc-68dqs [718.815704ms]
Mar 18 21:06:36.870: INFO: Created: latency-svc-lbcgb
Mar 18 21:06:36.930: INFO: Got endpoints: latency-svc-lbcgb [741.546988ms]
Mar 18 21:06:36.932: INFO: Created: latency-svc-j2dk6
Mar 18 21:06:36.941: INFO: Got endpoints: latency-svc-j2dk6 [714.362652ms]
Mar 18 21:06:36.983: INFO: Created: latency-svc-2gbwc
Mar 18 21:06:36.995: INFO: Got endpoints: latency-svc-2gbwc [719.522546ms]
Mar 18 21:06:37.019: INFO: Created: latency-svc-nrmm8
Mar 18 21:06:37.062: INFO: Got endpoints: latency-svc-nrmm8 [702.311601ms]
Mar 18 21:06:37.064: INFO: Created: latency-svc-2fhbc
Mar 18 21:06:37.074: INFO: Got endpoints: latency-svc-2fhbc [700.535965ms]
Mar 18 21:06:37.098: INFO: Created: latency-svc-h4mp5
Mar 18 21:06:37.110: INFO: Got endpoints: latency-svc-h4mp5 [700.07139ms]
Mar 18 21:06:37.127: INFO: Created: latency-svc-82c4w
Mar 18 21:06:37.140: INFO: Got endpoints: latency-svc-82c4w [640.038729ms]
Mar 18 21:06:37.207: INFO: Created: latency-svc-7rh8x
Mar 18 21:06:37.209: INFO: Got endpoints: latency-svc-7rh8x [708.877904ms]
Mar 18 21:06:37.235: INFO: Created: latency-svc-9xzjj
Mar 18 21:06:37.252: INFO: Got endpoints: latency-svc-9xzjj [721.442015ms]
Mar 18 21:06:37.277: INFO: Created: latency-svc-w9btp
Mar 18 21:06:37.291: INFO: Got endpoints: latency-svc-w9btp [717.946733ms]
Mar 18 21:06:37.338: INFO: Created: latency-svc-sbpmg
Mar 18 21:06:37.379: INFO: Got endpoints: latency-svc-sbpmg [719.304533ms]
Mar 18 21:06:37.482: INFO: Created: latency-svc-t4k5v
Mar 18 21:06:37.484: INFO: Got endpoints: latency-svc-t4k5v [808.54103ms]
Mar 18 21:06:37.511: INFO: Created: latency-svc-zm2l9
Mar 18 21:06:37.526: INFO: Got endpoints: latency-svc-zm2l9 [814.42383ms]
Mar 18 21:06:37.554: INFO: Created: latency-svc-kkr2k
Mar 18 21:06:37.655: INFO: Got endpoints: latency-svc-kkr2k [863.424238ms]
Mar 18 21:06:37.657: INFO: Created: latency-svc-vqnrc
Mar 18 21:06:37.661: INFO: Got endpoints: latency-svc-vqnrc [810.132973ms]
Mar 18 21:06:37.688: INFO: Created: latency-svc-4l2d7
Mar 18 21:06:37.695: INFO: Got endpoints: latency-svc-4l2d7 [764.458934ms]
Mar 18 21:06:37.715: INFO: Created: latency-svc-zbtrs
Mar 18 21:06:37.731: INFO: Got endpoints: latency-svc-zbtrs [790.470612ms]
Mar 18 21:06:37.751: INFO: Created: latency-svc-mfrzb
Mar 18 21:06:37.786: INFO: Got endpoints: latency-svc-mfrzb [791.21608ms]
Mar 18 21:06:37.793: INFO: Created: latency-svc-kzbhz
Mar 18 21:06:37.810: INFO: Got endpoints: latency-svc-kzbhz [747.917124ms]
Mar 18 21:06:37.829: INFO: Created: latency-svc-v49qt
Mar 18 21:06:37.846: INFO: Got endpoints: latency-svc-v49qt [772.639633ms]
Mar 18 21:06:37.865: INFO: Created: latency-svc-4ktwn
Mar 18 21:06:37.876: INFO: Got endpoints: latency-svc-4ktwn [766.049921ms]
Mar 18 21:06:37.919: INFO: Created: latency-svc-88b9n
Mar 18 21:06:37.921: INFO: Got endpoints: latency-svc-88b9n [780.680428ms]
Mar 18 21:06:37.921: INFO: Latencies: [94.285256ms 123.373124ms 151.627397ms 153.977859ms 189.864374ms 256.665442ms 279.948293ms 322.376514ms 360.717937ms 406.620293ms 483.151618ms 503.451588ms 533.470111ms 563.138271ms 568.651957ms 588.157883ms 589.49407ms 604.943277ms 607.021285ms 608.605487ms 613.79453ms 615.594346ms 615.698241ms 620.2883ms 620.589673ms 630.108076ms 640.038729ms 641.672404ms 651.562275ms 661.264472ms 664.524471ms 675.19502ms 677.456617ms 686.566413ms 687.870703ms 693.414429ms 699.611572ms 700.07139ms 700.535965ms 701.025788ms 702.311601ms 704.137253ms 704.787989ms 705.948729ms 707.045324ms 708.877904ms 709.682493ms 712.601234ms 714.362652ms 715.425706ms 716.563227ms 717.678371ms 717.946733ms 718.11442ms 718.741929ms 718.762641ms 718.815704ms 719.304533ms 719.522546ms 721.442015ms 723.976205ms 725.031382ms 733.669137ms 735.732213ms 736.379192ms 736.850933ms 737.224384ms 737.633992ms 738.212524ms 741.546988ms 741.805299ms 742.769784ms 744.165489ms 747.012007ms 747.917124ms 748.351902ms 751.35602ms 753.862235ms 754.226026ms 754.463705ms 754.975988ms 755.950761ms 760.31661ms 761.740401ms 761.93808ms 762.813516ms 763.65222ms 764.233111ms 764.33592ms 764.458934ms 765.471306ms 766.049921ms 766.050064ms 766.297145ms 766.786337ms 768.817067ms 769.030828ms 769.08189ms 772.266267ms 772.639633ms 773.63825ms 776.716726ms 777.044897ms 778.337943ms 778.496951ms 778.950284ms 780.680428ms 783.929134ms 783.940152ms 784.264639ms 785.234518ms 785.332133ms 788.55875ms 790.470612ms 791.21608ms 792.233004ms 794.747474ms 794.927647ms 795.326135ms 797.367237ms 802.454048ms 805.018244ms 807.053409ms 807.228699ms 808.54103ms 810.132973ms 810.312443ms 811.242714ms 813.086691ms 814.42383ms 816.633965ms 817.991654ms 819.859727ms 828.296836ms 832.218506ms 838.238375ms 838.936871ms 843.923937ms 844.202282ms 847.348225ms 847.679774ms 847.998903ms 848.63479ms 849.26071ms 851.407032ms 853.26707ms 855.052105ms 857.874884ms 858.766307ms 859.111149ms 861.313147ms 863.424238ms 865.941431ms 867.730505ms 869.333393ms 882.619211ms 887.593484ms 888.785837ms 889.683672ms 890.014129ms 890.869315ms 891.291132ms 892.998453ms 893.095822ms 896.938311ms 900.055627ms 901.931027ms 906.258946ms 906.277209ms 911.256014ms 913.859417ms 915.155739ms 915.783344ms 920.655701ms 920.875549ms 921.182041ms 921.441992ms 921.950291ms 927.65114ms 927.992675ms 931.022712ms 931.537891ms 932.925346ms 933.332409ms 939.293921ms 941.773578ms 946.35208ms 946.356995ms 948.028689ms 956.855186ms 957.575098ms 960.674802ms 980.801038ms 985.327769ms 991.303152ms 1.004287413s 1.005038725s 1.016785239s 1.032537794s 1.03343647s]
Mar 18 21:06:37.921: INFO: 50 %ile: 773.63825ms
Mar 18 21:06:37.921: INFO: 90 %ile: 931.022712ms
Mar 18 21:06:37.921: INFO: 99 %ile: 1.032537794s
Mar 18 21:06:37.921: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 18 21:06:37.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9576" for this suite.
• [SLOW TEST:13.776 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":2,"skipped":14,"failed":0}
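The latency being sampled in the spec above is the time from creating a Service to its Endpoints object being populated. A rough by-hand equivalent of one sample, with illustrative names (latency-demo, pause, pause-svc) rather than the suite's generated ones, assuming bash and GNU date:

    # Back one pod with a Deployment, then time Service -> Endpoints propagation.
    kubectl create namespace latency-demo
    kubectl -n latency-demo create deployment pause --image=k8s.gcr.io/pause:3.1
    kubectl -n latency-demo wait --for=condition=available deployment/pause
    start=$(date +%s%N)
    kubectl -n latency-demo expose deployment pause --port=80 --name=pause-svc
    # Poll until the Endpoints object lists a pod IP, then report elapsed ms.
    until [ -n "$(kubectl -n latency-demo get endpoints pause-svc -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; do
      sleep 0.05
    done
    echo "endpoints ready after $(( ($(date +%s%N) - start) / 1000000 )) ms"

The suite repeats this measurement 200 times against a single replication controller pod and reports the 50th/90th/99th percentiles, as in the Latencies summary above.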
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 18 21:06:37.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-5840
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-5840
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5840
Mar 18 21:06:38.050: INFO: Found 0 stateful pods, waiting for 1
Mar 18 21:06:48.074: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Mar 18 21:06:48.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5840 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 18 21:06:51.263: INFO: stderr: "I0318 21:06:51.131657 30 log.go:172] (0xc0008acbb0) (0xc0008ea140) Create stream\nI0318 21:06:51.131707 30 log.go:172] (0xc0008acbb0) (0xc0008ea140) Stream added, broadcasting: 1\nI0318 21:06:51.134259 30 log.go:172] (0xc0008acbb0) Reply frame received for 1\nI0318 21:06:51.134287 30 log.go:172] (0xc0008acbb0) (0xc00064f9a0) Create stream\nI0318 21:06:51.134294 30 log.go:172] (0xc0008acbb0) (0xc00064f9a0) Stream added, broadcasting: 3\nI0318 21:06:51.135112 30 log.go:172] (0xc0008acbb0) Reply frame received for 3\nI0318 21:06:51.135141 30 log.go:172] (0xc0008acbb0) (0xc00064fc20) Create stream\nI0318 21:06:51.135160 30 log.go:172] (0xc0008acbb0) (0xc00064fc20) Stream added, broadcasting: 5\nI0318 21:06:51.135928 30 log.go:172] (0xc0008acbb0) Reply frame received for 5\nI0318 21:06:51.202941 30 log.go:172] (0xc0008acbb0) Data frame received for 5\nI0318 21:06:51.202960 30 log.go:172] (0xc00064fc20) (5) Data frame handling\nI0318 21:06:51.202968 30 log.go:172] (0xc00064fc20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0318 21:06:51.255216 30 log.go:172] (0xc0008acbb0) Data frame received for 3\nI0318 21:06:51.255261 30 log.go:172] (0xc00064f9a0) (3) Data frame handling\nI0318 21:06:51.255298 30 log.go:172] (0xc00064f9a0) (3) Data frame sent\nI0318 21:06:51.255472 30 log.go:172] (0xc0008acbb0) Data frame received for 3\nI0318 21:06:51.255506 30 log.go:172] (0xc00064f9a0) (3) Data frame handling\nI0318 21:06:51.255548 30 log.go:172] (0xc0008acbb0) Data frame received for 5\nI0318 21:06:51.255578 30 log.go:172] (0xc00064fc20) (5) Data frame handling\nI0318 21:06:51.259725 30 log.go:172] (0xc0008acbb0) Data frame received for 1\nI0318 21:06:51.259753 30 log.go:172] (0xc0008ea140) (1) Data frame handling\nI0318 21:06:51.259768 30 log.go:172] (0xc0008ea140) (1) Data frame sent\nI0318 21:06:51.259784 30 log.go:172] (0xc0008acbb0) (0xc0008ea140) Stream removed, broadcasting: 1\nI0318 21:06:51.259841 30 log.go:172] (0xc0008acbb0) Go away received\nI0318 21:06:51.260098 30 log.go:172] (0xc0008acbb0) (0xc0008ea140) Stream removed, broadcasting: 1\nI0318 21:06:51.260120 30 log.go:172] (0xc0008acbb0) (0xc00064f9a0) Stream removed, broadcasting: 3\nI0318 21:06:51.260141 30 log.go:172] (0xc0008acbb0) (0xc00064fc20) Stream removed, broadcasting: 5\n"
Mar 18 21:06:51.263: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 18 21:06:51.263: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 18 21:06:51.271: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Mar 18 21:07:01.276: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 18 21:07:01.276: INFO: Waiting for statefulset status.replicas updated to 0
Mar 18 21:07:01.294: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 18 21:07:01.294: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:38 +0000 UTC }]
Mar 18 21:07:01.294: INFO:
Mar 18 21:07:01.294: INFO: StatefulSet ss has not reached scale 3, at 1
Mar 18 21:07:02.299: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991311607s
Mar 18 21:07:03.304: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986670114s
Mar 18 21:07:04.332: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981707924s
Mar 18 21:07:05.337: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.953253468s
Mar 18 21:07:06.342: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.948605236s
Mar 18 21:07:07.346: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.943715985s
Mar 18 21:07:08.351: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.939649768s
Mar 18 21:07:09.356: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.934438727s
Mar 18 21:07:10.362: INFO: Verifying statefulset ss doesn't scale past 3 for another 929.102504ms
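The mv dance in the exec calls above and below is how the test drives pods unhealthy and healthy again: the webserver container's readiness check evidently depends on index.html being served, so moving the file out of the httpd docroot fails the probe without killing the container. A by-hand sketch of the same trick (the namespace and pod names follow this run, but the commands here are illustrative, not the framework's):

    # Break readiness: the probe starts failing once index.html leaves the docroot.
    kubectl -n statefulset-5840 exec ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
    kubectl -n statefulset-5840 get pod ss-0 -w   # READY drops to 0/1 while the pod stays Running
    # Restore readiness: move the file back.
    kubectl -n statefulset-5840 exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'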
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5840
Mar 18 21:07:11.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5840 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 18 21:07:11.595: INFO: stderr: "I0318 21:07:11.508575 60 log.go:172] (0xc0001053f0) (0xc0009d0000) Create stream\nI0318 21:07:11.508641 60 log.go:172] (0xc0001053f0) (0xc0009d0000) Stream added, broadcasting: 1\nI0318 21:07:11.511606 60 log.go:172] (0xc0001053f0) Reply frame received for 1\nI0318 21:07:11.511650 60 log.go:172] (0xc0001053f0) (0xc00063bc20) Create stream\nI0318 21:07:11.511667 60 log.go:172] (0xc0001053f0) (0xc00063bc20) Stream added, broadcasting: 3\nI0318 21:07:11.513001 60 log.go:172] (0xc0001053f0) Reply frame received for 3\nI0318 21:07:11.513046 60 log.go:172] (0xc0001053f0) (0xc0008ea000) Create stream\nI0318 21:07:11.513061 60 log.go:172] (0xc0001053f0) (0xc0008ea000) Stream added, broadcasting: 5\nI0318 21:07:11.514185 60 log.go:172] (0xc0001053f0) Reply frame received for 5\nI0318 21:07:11.589362 60 log.go:172] (0xc0001053f0) Data frame received for 3\nI0318 21:07:11.589409 60 log.go:172] (0xc00063bc20) (3) Data frame handling\nI0318 21:07:11.589431 60 log.go:172] (0xc00063bc20) (3) Data frame sent\nI0318 21:07:11.589441 60 log.go:172] (0xc0001053f0) Data frame received for 3\nI0318 21:07:11.589449 60 log.go:172] (0xc00063bc20) (3) Data frame handling\nI0318 21:07:11.589495 60 log.go:172] (0xc0001053f0) Data frame received for 5\nI0318 21:07:11.589536 60 log.go:172] (0xc0008ea000) (5) Data frame handling\nI0318 21:07:11.589580 60 log.go:172] (0xc0008ea000) (5) Data frame sent\nI0318 21:07:11.589603 60 log.go:172] (0xc0001053f0) Data frame received for 5\nI0318 21:07:11.589620 60 log.go:172] (0xc0008ea000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0318 21:07:11.591041 60 log.go:172] (0xc0001053f0) Data frame received for 1\nI0318 21:07:11.591060 60 log.go:172] (0xc0009d0000) (1) Data frame handling\nI0318 21:07:11.591075 60 log.go:172] (0xc0009d0000) (1) Data frame sent\nI0318 21:07:11.591091 60 log.go:172] (0xc0001053f0) (0xc0009d0000) Stream removed, broadcasting: 1\nI0318 21:07:11.591153 60 log.go:172] (0xc0001053f0) Go away received\nI0318 21:07:11.591516 60 log.go:172] (0xc0001053f0) (0xc0009d0000) Stream removed, broadcasting: 1\nI0318 21:07:11.591535 60 log.go:172] (0xc0001053f0) (0xc00063bc20) Stream removed, broadcasting: 3\nI0318 21:07:11.591545 60 log.go:172] (0xc0001053f0) (0xc0008ea000) Stream removed, broadcasting: 5\n"
Mar 18 21:07:11.595: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 18 21:07:11.595: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 18 21:07:11.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5840 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 18 21:07:11.792: INFO: stderr: "I0318 21:07:11.722919 83 log.go:172] (0xc00098ca50) (0xc000645b80) Create stream\nI0318 21:07:11.722984 83 log.go:172] (0xc00098ca50) (0xc000645b80) Stream added, broadcasting: 1\nI0318 21:07:11.727322 83 log.go:172] (0xc00098ca50) Reply frame received for 1\nI0318 21:07:11.727402 83 log.go:172] (0xc00098ca50) (0xc000614000) Create stream\nI0318 21:07:11.727435 83 log.go:172] (0xc00098ca50) (0xc000614000) Stream added, broadcasting: 3\nI0318 21:07:11.730406 83 log.go:172] (0xc00098ca50) Reply frame received for 3\nI0318 21:07:11.730444 83 log.go:172] (0xc00098ca50) (0xc000614140) Create stream\nI0318 21:07:11.730461 83 log.go:172] (0xc00098ca50) (0xc000614140) Stream added, broadcasting: 5\nI0318 21:07:11.732006 83 log.go:172] (0xc00098ca50) Reply frame received for 5\nI0318 21:07:11.785656 83 log.go:172] (0xc00098ca50) Data frame received for 3\nI0318 21:07:11.785673 83 log.go:172] (0xc000614000) (3) Data frame handling\nI0318 21:07:11.785680 83 log.go:172] (0xc000614000) (3) Data frame sent\nI0318 21:07:11.785685 83 log.go:172] (0xc00098ca50) Data frame received for 3\nI0318 21:07:11.785689 83 log.go:172] (0xc000614000) (3) Data frame handling\nI0318 21:07:11.785932 83 log.go:172] (0xc00098ca50) Data frame received for 5\nI0318 21:07:11.785943 83 log.go:172] (0xc000614140) (5) Data frame handling\nI0318 21:07:11.785949 83 log.go:172] (0xc000614140) (5) Data frame sent\nI0318 21:07:11.785954 83 log.go:172] (0xc00098ca50) Data frame received for 5\nI0318 21:07:11.785958 83 log.go:172] (0xc000614140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0318 21:07:11.787954 83 log.go:172] (0xc00098ca50) Data frame received for 1\nI0318 21:07:11.787993 83 log.go:172] (0xc000645b80) (1) Data frame handling\nI0318 21:07:11.788035 83 log.go:172] (0xc000645b80) (1) Data frame sent\nI0318 21:07:11.788235 83 log.go:172] (0xc00098ca50) (0xc000645b80) Stream removed, broadcasting: 1\nI0318 21:07:11.788348 83 log.go:172] (0xc00098ca50) Go away received\nI0318 21:07:11.788706 83 log.go:172] (0xc00098ca50) (0xc000645b80) Stream removed, broadcasting: 1\nI0318 21:07:11.788737 83 log.go:172] (0xc00098ca50) (0xc000614000) Stream removed, broadcasting: 3\nI0318 21:07:11.788758 83 log.go:172] (0xc00098ca50) (0xc000614140) Stream removed, broadcasting: 5\n"
Mar 18 21:07:11.793: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 18 21:07:11.793: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 18 21:07:11.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5840 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 18 21:07:11.978: INFO: stderr: "I0318 21:07:11.912835 105 log.go:172] (0xc000105340) (0xc0006e5ea0) Create stream\nI0318 21:07:11.912918 105 log.go:172] (0xc000105340) (0xc0006e5ea0) Stream added, broadcasting: 1\nI0318 21:07:11.916018 105 log.go:172] (0xc000105340) Reply frame received for 1\nI0318 21:07:11.916061 105 log.go:172] (0xc000105340) (0xc00068c780) Create stream\nI0318 21:07:11.916074 105 log.go:172] (0xc000105340) (0xc00068c780) Stream added, broadcasting: 3\nI0318 21:07:11.917299 105 log.go:172] (0xc000105340) Reply frame received for 3\nI0318 21:07:11.917352 105 log.go:172] (0xc000105340) (0xc0006e5f40) Create stream\nI0318 21:07:11.917363 105 log.go:172] (0xc000105340) (0xc0006e5f40) Stream added, broadcasting: 5\nI0318 21:07:11.918408 105 log.go:172] (0xc000105340) Reply frame received for 5\nI0318 21:07:11.972013 105 log.go:172] (0xc000105340) Data frame received for 5\nI0318 21:07:11.972057 105 log.go:172] (0xc0006e5f40) (5) Data frame handling\nI0318 21:07:11.972099 105 log.go:172] (0xc0006e5f40) (5) Data frame sent\nI0318 21:07:11.972187 105 log.go:172] (0xc000105340) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0318 21:07:11.972210 105 log.go:172] (0xc00068c780) (3) Data frame handling\nI0318 21:07:11.972233 105 log.go:172] (0xc00068c780) (3) Data frame sent\nI0318 21:07:11.972252 105 log.go:172] (0xc000105340) Data frame received for 3\nI0318 21:07:11.972309 105 log.go:172] (0xc00068c780) (3) Data frame handling\nI0318 21:07:11.972535 105 log.go:172] (0xc000105340) Data frame received for 5\nI0318 21:07:11.972560 105 log.go:172] (0xc0006e5f40) (5) Data frame handling\nI0318 21:07:11.974355 105 log.go:172] (0xc000105340) Data frame received for 1\nI0318 21:07:11.974383 105 log.go:172] (0xc0006e5ea0) (1) Data frame handling\nI0318 21:07:11.974430 105 log.go:172] (0xc0006e5ea0) (1) Data frame sent\nI0318 21:07:11.974545 105 log.go:172] (0xc000105340) (0xc0006e5ea0) Stream removed, broadcasting: 1\nI0318 21:07:11.974591 105 log.go:172] (0xc000105340) Go away received\nI0318 21:07:11.975011 105 log.go:172] (0xc000105340) (0xc0006e5ea0) Stream removed, broadcasting: 1\nI0318 21:07:11.975035 105 log.go:172] (0xc000105340) (0xc00068c780) Stream removed, broadcasting: 3\nI0318 21:07:11.975048 105 log.go:172] (0xc000105340) (0xc0006e5f40) Stream removed, broadcasting: 5\n"
Mar 18 21:07:11.979: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 18 21:07:11.979: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 18 21:07:11.983: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Mar 18 21:07:21.988: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 18 21:07:21.988: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 18 21:07:21.988: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
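"Burst scaling" in this spec corresponds, as far as the log shows, to a StatefulSet whose pods are managed in parallel rather than one ordinal at a time (podManagementPolicy: Parallel), so scale operations proceed even while pods are failing their readiness probes. A hedged by-hand equivalent of the scale-down that follows, using the names from this run:

    # With parallel pod management, all three pods terminate together,
    # even though every one of them is about to be made unready.
    kubectl -n statefulset-5840 scale statefulset ss --replicas=0
    kubectl -n statefulset-5840 get pods -w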
Mar 18 21:07:21.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5840 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 18 21:07:22.211: INFO: stderr: "I0318 21:07:22.125398 127 log.go:172] (0xc000106bb0) (0xc000691f40) Create stream\nI0318 21:07:22.125447 127 log.go:172] (0xc000106bb0) (0xc000691f40) Stream added, broadcasting: 1\nI0318 21:07:22.127745 127 log.go:172] (0xc000106bb0) Reply frame received for 1\nI0318 21:07:22.127783 127 log.go:172] (0xc000106bb0) (0xc000610820) Create stream\nI0318 21:07:22.127793 127 log.go:172] (0xc000106bb0) (0xc000610820) Stream added, broadcasting: 3\nI0318 21:07:22.128797 127 log.go:172] (0xc000106bb0) Reply frame received for 3\nI0318 21:07:22.128828 127 log.go:172] (0xc000106bb0) (0xc0003555e0) Create stream\nI0318 21:07:22.128837 127 log.go:172] (0xc000106bb0) (0xc0003555e0) Stream added, broadcasting: 5\nI0318 21:07:22.129953 127 log.go:172] (0xc000106bb0) Reply frame received for 5\nI0318 21:07:22.205373 127 log.go:172] (0xc000106bb0) Data frame received for 3\nI0318 21:07:22.205394 127 log.go:172] (0xc000610820) (3) Data frame handling\nI0318 21:07:22.205407 127 log.go:172] (0xc000610820) (3) Data frame sent\nI0318 21:07:22.205416 127 log.go:172] (0xc000106bb0) Data frame received for 3\nI0318 21:07:22.205424 127 log.go:172] (0xc000610820) (3) Data frame handling\nI0318 21:07:22.205814 127 log.go:172] (0xc000106bb0) Data frame received for 5\nI0318 21:07:22.205850 127 log.go:172] (0xc0003555e0) (5) Data frame handling\nI0318 21:07:22.205884 127 log.go:172] (0xc0003555e0) (5) Data frame sent\nI0318 21:07:22.205902 127 log.go:172] (0xc000106bb0) Data frame received for 5\nI0318 21:07:22.205917 127 log.go:172] (0xc0003555e0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0318 21:07:22.207841 127 log.go:172] (0xc000106bb0) Data frame received for 1\nI0318 21:07:22.207874 127 log.go:172] (0xc000691f40) (1) Data frame handling\nI0318 21:07:22.207891 127 log.go:172] (0xc000691f40) (1) Data frame sent\nI0318 21:07:22.207908 127 log.go:172] (0xc000106bb0) (0xc000691f40) Stream removed, broadcasting: 1\nI0318 21:07:22.207929 127 log.go:172] (0xc000106bb0) Go away received\nI0318 21:07:22.208243 127 log.go:172] (0xc000106bb0) (0xc000691f40) Stream removed, broadcasting: 1\nI0318 21:07:22.208267 127 log.go:172] (0xc000106bb0) (0xc000610820) Stream removed, broadcasting: 3\nI0318 21:07:22.208279 127 log.go:172] (0xc000106bb0) (0xc0003555e0) Stream removed, broadcasting: 5\n"
Mar 18 21:07:22.212: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 18 21:07:22.212: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 18 21:07:22.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5840 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 18 21:07:22.443: INFO: stderr: "I0318 21:07:22.339833 149 log.go:172] (0xc0009ab6b0) (0xc0009d06e0) Create stream\nI0318 21:07:22.339893 149 log.go:172] (0xc0009ab6b0) (0xc0009d06e0) Stream added, broadcasting: 1\nI0318 21:07:22.345365 149 log.go:172] (0xc0009ab6b0) Reply frame received for 1\nI0318 21:07:22.345410 149 log.go:172] (0xc0009ab6b0) (0xc00061c780) Create stream\nI0318 21:07:22.345424 149 log.go:172] (0xc0009ab6b0) (0xc00061c780) Stream added, broadcasting: 3\nI0318 21:07:22.346396 149 log.go:172] (0xc0009ab6b0) Reply frame received for 3\nI0318 21:07:22.346434 149 log.go:172] (0xc0009ab6b0) (0xc0003a1540) Create stream\nI0318 21:07:22.346449 149 log.go:172] (0xc0009ab6b0) (0xc0003a1540) Stream added, broadcasting: 5\nI0318 21:07:22.347500 149 log.go:172] (0xc0009ab6b0) Reply frame received for 5\nI0318 21:07:22.408877 149 log.go:172] (0xc0009ab6b0) Data frame received for 5\nI0318 21:07:22.408902 149 log.go:172] (0xc0003a1540) (5) Data frame handling\nI0318 21:07:22.408921 149 log.go:172] (0xc0003a1540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0318 21:07:22.436798 149 log.go:172] (0xc0009ab6b0) Data frame received for 3\nI0318 21:07:22.436923 149 log.go:172] (0xc00061c780) (3) Data frame handling\nI0318 21:07:22.437016 149 log.go:172] (0xc00061c780) (3) Data frame sent\nI0318 21:07:22.437023 149 log.go:172] (0xc0009ab6b0) Data frame received for 3\nI0318 21:07:22.437028 149 log.go:172] (0xc00061c780) (3) Data frame handling\nI0318 21:07:22.437037 149 log.go:172] (0xc0009ab6b0) Data frame received for 5\nI0318 21:07:22.437042 149 log.go:172] (0xc0003a1540) (5) Data frame handling\nI0318 21:07:22.439349 149 log.go:172] (0xc0009ab6b0) Data frame received for 1\nI0318 21:07:22.439383 149 log.go:172] (0xc0009d06e0) (1) Data frame handling\nI0318 21:07:22.439404 149 log.go:172] (0xc0009d06e0) (1) Data frame sent\nI0318 21:07:22.439428 149 log.go:172] (0xc0009ab6b0) (0xc0009d06e0) Stream removed, broadcasting: 1\nI0318 21:07:22.439586 149 log.go:172] (0xc0009ab6b0) Go away received\nI0318 21:07:22.439935 149 log.go:172] (0xc0009ab6b0) (0xc0009d06e0) Stream removed, broadcasting: 1\nI0318 21:07:22.439960 149 log.go:172] (0xc0009ab6b0) (0xc00061c780) Stream removed, broadcasting: 3\nI0318 21:07:22.439971 149 log.go:172] (0xc0009ab6b0) (0xc0003a1540) Stream removed, broadcasting: 5\n"
Mar 18 21:07:22.444: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 18 21:07:22.444: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 18 21:07:22.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5840 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 18 21:07:22.713: INFO: stderr: "I0318 21:07:22.570942 170 log.go:172] (0xc000b2e160) (0xc000565360) Create stream\nI0318 21:07:22.571001 170 log.go:172] (0xc000b2e160) (0xc000565360) Stream added, broadcasting: 1\nI0318 21:07:22.574525 170 log.go:172] (0xc000b2e160) Reply frame received for 1\nI0318 21:07:22.574573 170 log.go:172] (0xc000b2e160) (0xc00064da40) Create stream\nI0318 21:07:22.574588 170 log.go:172] (0xc000b2e160) (0xc00064da40) Stream added, broadcasting: 3\nI0318 21:07:22.575715 170 log.go:172] (0xc000b2e160) Reply frame received for 3\nI0318 21:07:22.575755 170 log.go:172] (0xc000b2e160) (0xc0009ee000) Create stream\nI0318 21:07:22.575764 170 log.go:172] (0xc000b2e160) (0xc0009ee000) Stream added, broadcasting: 5\nI0318 21:07:22.576783 170 log.go:172] (0xc000b2e160) Reply frame received for 5\nI0318 21:07:22.651537 170 log.go:172] (0xc000b2e160) Data frame received for 5\nI0318 21:07:22.651560 170 log.go:172] (0xc0009ee000) (5) Data frame handling\nI0318 21:07:22.651576 170 log.go:172] (0xc0009ee000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0318 21:07:22.706545 170 log.go:172] (0xc000b2e160) Data frame received for 3\nI0318 21:07:22.706581 170 log.go:172] (0xc00064da40) (3) Data frame handling\nI0318 21:07:22.706614 170 log.go:172] (0xc00064da40) (3) Data frame sent\nI0318 21:07:22.706646 170 log.go:172] (0xc000b2e160) Data frame received for 3\nI0318 21:07:22.706670 170 log.go:172] (0xc00064da40) (3) Data frame handling\nI0318 21:07:22.706842 170 log.go:172] (0xc000b2e160) Data frame received for 5\nI0318 21:07:22.706885 170 log.go:172] (0xc0009ee000) (5) Data frame handling\nI0318 21:07:22.708665 170 log.go:172] (0xc000b2e160) Data frame received for 1\nI0318 21:07:22.708699 170 log.go:172] (0xc000565360) (1) Data frame handling\nI0318 21:07:22.708731 170 log.go:172] (0xc000565360) (1) Data frame sent\nI0318 21:07:22.708769 170 log.go:172] (0xc000b2e160) (0xc000565360) Stream removed, broadcasting: 1\nI0318 21:07:22.708805 170 log.go:172] (0xc000b2e160) Go away received\nI0318 21:07:22.709283 170 log.go:172] (0xc000b2e160) (0xc000565360) Stream removed, broadcasting: 1\nI0318 21:07:22.709316 170 log.go:172] (0xc000b2e160) (0xc00064da40) Stream removed, broadcasting: 3\nI0318 21:07:22.709330 170 log.go:172] (0xc000b2e160) (0xc0009ee000) Stream removed, broadcasting: 5\n"
Mar 18 21:07:22.713: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 18 21:07:22.713: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 18 21:07:22.713: INFO: Waiting for statefulset status.replicas updated to 0
Mar 18 21:07:22.717: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Mar 18 21:07:32.726: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 18 21:07:32.726: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 18 21:07:32.743: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 21:07:32.743: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:38 +0000 UTC }] Mar 18 21:07:32.743: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:01 +0000 UTC }] Mar 18 21:07:32.743: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:01 +0000 UTC }] Mar 18 21:07:32.743: INFO: Mar 18 21:07:32.743: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 18 21:07:33.748: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 21:07:33.748: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:38 +0000 UTC }] Mar 18 21:07:33.748: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:01 +0000 UTC }] Mar 18 21:07:33.748: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:01 +0000 UTC }] Mar 18 21:07:33.748: INFO: Mar 18 21:07:33.748: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 18 21:07:34.764: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 21:07:34.764: INFO: ss-0 jerma-worker Pending 30s [{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:38 +0000 UTC }] Mar 18 21:07:34.764: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:01 +0000 UTC }] Mar 18 21:07:34.764: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:01 +0000 UTC }] Mar 18 21:07:34.764: INFO: Mar 18 21:07:34.764: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 18 21:07:35.792: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 21:07:35.792: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:38 +0000 UTC }] Mar 18 21:07:35.792: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:01 +0000 UTC }] Mar 18 21:07:35.792: INFO: Mar 18 21:07:35.792: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 18 21:07:36.797: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 21:07:36.797: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:38 +0000 UTC }] Mar 18 21:07:36.797: INFO: Mar 18 21:07:36.797: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 18 21:07:37.801: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 21:07:37.801: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 
21:06:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:38 +0000 UTC }] Mar 18 21:07:37.801: INFO: Mar 18 21:07:37.801: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 18 21:07:38.806: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 21:07:38.806: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:07:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 21:06:38 +0000 UTC }] Mar 18 21:07:38.806: INFO: Mar 18 21:07:38.806: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 18 21:07:39.815: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.925563516s Mar 18 21:07:40.819: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.91699953s Mar 18 21:07:41.827: INFO: Verifying statefulset ss doesn't scale past 0 for another 912.476128ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5840 Mar 18 21:07:42.831: INFO: Scaling statefulset ss to 0 Mar 18 21:07:42.841: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 18 21:07:42.844: INFO: Deleting all statefulset in ns statefulset-5840 Mar 18 21:07:42.847: INFO: Scaling statefulset ss to 0 Mar 18 21:07:42.856: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 21:07:42.858: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:07:42.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5840" for this suite. 
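The scale-down-and-wait step logged above ("Scaling statefulset ss to 0", then "Waiting for statefulset status.replicas updated to 0") can be reproduced outside the e2e framework with a short client-go sketch. This is an illustration, not the framework's own helper; it assumes a recent client-go (context-taking method signatures, newer than the v1.17 suite in this log) and reuses the namespace and name from the log.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "statefulset-5840", "ss" // taken from the log above
	ctx := context.Background()

	// Scale the StatefulSet down to zero replicas.
	ss, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	zero := int32(0)
	ss.Spec.Replicas = &zero
	if _, err := cs.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Poll until status.replicas reaches 0, mirroring the test's wait loop.
	err = wait.PollImmediate(time.Second, 3*time.Minute, func() (bool, error) {
		cur, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return cur.Status.Replicas == 0, nil
	})
	fmt.Println("scaled to zero:", err == nil)
}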
• [SLOW TEST:64.962 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":3,"skipped":14,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:07:42.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-ff9c90c0-eb88-413d-93fd-1a7b1ae46be8 STEP: Creating a pod to test consume configMaps Mar 18 21:07:42.955: INFO: Waiting up to 5m0s for pod "pod-configmaps-0b9feffb-c61b-4b1f-bc9e-0ae9419270c5" in namespace "configmap-3678" to be "success or failure" Mar 18 21:07:42.959: INFO: Pod "pod-configmaps-0b9feffb-c61b-4b1f-bc9e-0ae9419270c5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.731744ms Mar 18 21:07:44.963: INFO: Pod "pod-configmaps-0b9feffb-c61b-4b1f-bc9e-0ae9419270c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008158551s Mar 18 21:07:46.967: INFO: Pod "pod-configmaps-0b9feffb-c61b-4b1f-bc9e-0ae9419270c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012649597s STEP: Saw pod success Mar 18 21:07:46.968: INFO: Pod "pod-configmaps-0b9feffb-c61b-4b1f-bc9e-0ae9419270c5" satisfied condition "success or failure" Mar 18 21:07:46.971: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-0b9feffb-c61b-4b1f-bc9e-0ae9419270c5 container configmap-volume-test: STEP: delete the pod Mar 18 21:07:47.018: INFO: Waiting for pod pod-configmaps-0b9feffb-c61b-4b1f-bc9e-0ae9419270c5 to disappear Mar 18 21:07:47.051: INFO: Pod pod-configmaps-0b9feffb-c61b-4b1f-bc9e-0ae9419270c5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:07:47.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3678" for this suite. 
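The "with mappings" part of the ConfigMap test above refers to remapping ConfigMap keys to custom file paths inside the mounted volume. A hedged sketch of the shape of pod the test creates follows; the key ("data-1") and target path ("path/to/data-2") are illustrative placeholders, not values read from this log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // pod runs once, then Succeeded or Failed
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// The mapping under test: project key "data-1" to a custom relative path.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	fmt.Println("pod spec built:", pod.Name)
}

The log's "success or failure" wait then amounts to polling this pod's phase until it reaches Succeeded (the cat read the mapped file) or Failed.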
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":19,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:07:47.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-226.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-226.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 18 21:07:53.178: INFO: DNS probes using dns-226/dns-test-0f59e720-d509-4a1f-8c32-4fbf86d56055 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:07:53.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-226" for this suite. 
• [SLOW TEST:6.223 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":5,"skipped":43,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:07:53.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-d126b7e5-702b-4912-875a-9d40549ac764 STEP: Creating a pod to test consume secrets Mar 18 21:07:53.362: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f04106c3-3250-4ab8-ab78-499e73334d30" in namespace "projected-8030" to be "success or failure" Mar 18 21:07:53.366: INFO: Pod "pod-projected-secrets-f04106c3-3250-4ab8-ab78-499e73334d30": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006502ms Mar 18 21:07:55.371: INFO: Pod "pod-projected-secrets-f04106c3-3250-4ab8-ab78-499e73334d30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008120911s Mar 18 21:07:57.375: INFO: Pod "pod-projected-secrets-f04106c3-3250-4ab8-ab78-499e73334d30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01268051s STEP: Saw pod success Mar 18 21:07:57.375: INFO: Pod "pod-projected-secrets-f04106c3-3250-4ab8-ab78-499e73334d30" satisfied condition "success or failure" Mar 18 21:07:57.378: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-f04106c3-3250-4ab8-ab78-499e73334d30 container projected-secret-volume-test: STEP: delete the pod Mar 18 21:07:57.442: INFO: Waiting for pod pod-projected-secrets-f04106c3-3250-4ab8-ab78-499e73334d30 to disappear Mar 18 21:07:57.450: INFO: Pod pod-projected-secrets-f04106c3-3250-4ab8-ab78-499e73334d30 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:07:57.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8030" for this suite. 
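A projected secret volume, as exercised in the test above, wraps one or more sources (secrets, configMaps, downward API, service account tokens) in a single mount. A minimal sketch of the volume shape the test builds; the secret name here is a placeholder standing in for the generated projected-secret-test-... name in the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						// Placeholder name; the e2e test generates a unique one.
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
					},
				}},
			},
		},
	}
	fmt.Println("volume built:", vol.Name)
}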
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":43,"failed":0} S ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:07:57.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:07:57.524: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:08:01.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7830" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":44,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:08:01.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible 
to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:08:32.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5719" for this suite. • [SLOW TEST:30.609 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":51,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:08:32.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:08:49.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-613" for this suite. • [SLOW TEST:17.149 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":9,"skipped":63,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:08:49.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 18 21:08:50.273: INFO: Pod name wrapped-volume-race-52e57196-ad29-4073-9889-44a6b527e263: Found 0 pods out of 5 Mar 18 21:08:55.281: INFO: Pod name wrapped-volume-race-52e57196-ad29-4073-9889-44a6b527e263: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-52e57196-ad29-4073-9889-44a6b527e263 in namespace emptydir-wrapper-5406, will wait for the garbage collector to delete the pods Mar 18 21:09:07.363: INFO: Deleting ReplicationController wrapped-volume-race-52e57196-ad29-4073-9889-44a6b527e263 took: 6.895303ms Mar 18 21:09:07.763: INFO: Terminating ReplicationController wrapped-volume-race-52e57196-ad29-4073-9889-44a6b527e263 pods took: 400.318552ms STEP: Creating RC which spawns configmap-volume pods Mar 18 21:09:20.639: INFO: Pod name wrapped-volume-race-dbfd5393-7134-4922-bd78-21b68c9d1eb6: Found 0 pods out of 5 Mar 18 21:09:25.647: INFO: Pod name wrapped-volume-race-dbfd5393-7134-4922-bd78-21b68c9d1eb6: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-dbfd5393-7134-4922-bd78-21b68c9d1eb6 in namespace emptydir-wrapper-5406, will wait for the garbage collector to delete the pods Mar 18 21:09:39.733: INFO: Deleting ReplicationController wrapped-volume-race-dbfd5393-7134-4922-bd78-21b68c9d1eb6 took: 6.634304ms Mar 18 21:09:40.134: INFO: Terminating ReplicationController wrapped-volume-race-dbfd5393-7134-4922-bd78-21b68c9d1eb6 pods took: 400.335071ms STEP: Creating RC which spawns configmap-volume pods Mar 18 21:09:49.974: INFO: Pod name wrapped-volume-race-427740af-69a5-47c6-85e0-05b317553785: Found 0 pods out of 5 Mar 18 21:09:54.981: INFO: Pod name wrapped-volume-race-427740af-69a5-47c6-85e0-05b317553785: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-427740af-69a5-47c6-85e0-05b317553785 in namespace emptydir-wrapper-5406, will wait for the garbage collector to delete the pods Mar 18 21:10:09.114: INFO: Deleting ReplicationController wrapped-volume-race-427740af-69a5-47c6-85e0-05b317553785 took: 5.184635ms Mar 18 21:10:09.415: INFO: Terminating ReplicationController wrapped-volume-race-427740af-69a5-47c6-85e0-05b317553785 pods took: 300.277297ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:10:20.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "emptydir-wrapper-5406" for this suite. • [SLOW TEST:90.783 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":10,"skipped":80,"failed":0} SSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:10:20.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4145 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4145;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4145 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4145;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4145.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4145.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4145.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4145.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4145.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4145.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4145.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4145.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4145.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4145.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4145.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4145.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4145.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 146.198.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.198.146_udp@PTR;check="$$(dig +tcp +noall +answer +search 146.198.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.198.146_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4145 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4145;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4145 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4145;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4145.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4145.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4145.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4145.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4145.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4145.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4145.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4145.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4145.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4145.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4145.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4145.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4145.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 146.198.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.198.146_udp@PTR;check="$$(dig +tcp +noall +answer +search 146.198.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.198.146_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 18 21:10:26.662: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:26.667: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:26.673: INFO: Unable to read wheezy_udp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:26.685: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:26.697: INFO: Unable to read wheezy_udp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:26.703: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:26.729: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:26.739: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:26.853: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:26.871: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:26.877: INFO: Unable to read jessie_udp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:26.883: INFO: Unable to read jessie_tcp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:26.888: INFO: Unable to read jessie_udp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:26.900: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:26.906: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:26.913: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:26.972: INFO: Lookups using dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4145 wheezy_tcp@dns-test-service.dns-4145 wheezy_udp@dns-test-service.dns-4145.svc wheezy_tcp@dns-test-service.dns-4145.svc wheezy_udp@_http._tcp.dns-test-service.dns-4145.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4145.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4145 jessie_tcp@dns-test-service.dns-4145 jessie_udp@dns-test-service.dns-4145.svc jessie_tcp@dns-test-service.dns-4145.svc jessie_udp@_http._tcp.dns-test-service.dns-4145.svc jessie_tcp@_http._tcp.dns-test-service.dns-4145.svc] Mar 18 21:10:32.115: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:32.119: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:32.122: INFO: Unable to read wheezy_udp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:32.125: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:32.128: INFO: Unable to read wheezy_udp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:32.131: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:32.134: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:32.136: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:32.156: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:32.158: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:32.160: INFO: Unable to read jessie_udp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:32.163: INFO: Unable to read jessie_tcp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:32.166: INFO: Unable to read jessie_udp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:32.168: INFO: Unable to read jessie_tcp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:32.171: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:32.173: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:32.191: INFO: Lookups using dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4145 wheezy_tcp@dns-test-service.dns-4145 wheezy_udp@dns-test-service.dns-4145.svc wheezy_tcp@dns-test-service.dns-4145.svc wheezy_udp@_http._tcp.dns-test-service.dns-4145.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4145.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4145 jessie_tcp@dns-test-service.dns-4145 jessie_udp@dns-test-service.dns-4145.svc jessie_tcp@dns-test-service.dns-4145.svc jessie_udp@_http._tcp.dns-test-service.dns-4145.svc jessie_tcp@_http._tcp.dns-test-service.dns-4145.svc] Mar 18 21:10:36.978: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:36.981: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:36.985: INFO: Unable to read wheezy_udp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:36.987: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4145 from pod 
dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:36.990: INFO: Unable to read wheezy_udp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:36.992: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:36.995: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:36.998: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:37.021: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:37.024: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:37.027: INFO: Unable to read jessie_udp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:37.031: INFO: Unable to read jessie_tcp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:37.034: INFO: Unable to read jessie_udp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:37.037: INFO: Unable to read jessie_tcp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:37.040: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:37.043: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:37.062: INFO: Lookups using dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4145 wheezy_tcp@dns-test-service.dns-4145 wheezy_udp@dns-test-service.dns-4145.svc wheezy_tcp@dns-test-service.dns-4145.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-4145.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4145.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4145 jessie_tcp@dns-test-service.dns-4145 jessie_udp@dns-test-service.dns-4145.svc jessie_tcp@dns-test-service.dns-4145.svc jessie_udp@_http._tcp.dns-test-service.dns-4145.svc jessie_tcp@_http._tcp.dns-test-service.dns-4145.svc] Mar 18 21:10:41.977: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:41.981: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:41.984: INFO: Unable to read wheezy_udp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:41.988: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:41.992: INFO: Unable to read wheezy_udp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:41.996: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:41.999: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:42.003: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:42.032: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:42.036: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:42.039: INFO: Unable to read jessie_udp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:42.041: INFO: Unable to read jessie_tcp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:42.044: INFO: Unable to read jessie_udp@dns-test-service.dns-4145.svc from pod 
dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:42.046: INFO: Unable to read jessie_tcp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:42.049: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:42.052: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:42.068: INFO: Lookups using dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4145 wheezy_tcp@dns-test-service.dns-4145 wheezy_udp@dns-test-service.dns-4145.svc wheezy_tcp@dns-test-service.dns-4145.svc wheezy_udp@_http._tcp.dns-test-service.dns-4145.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4145.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4145 jessie_tcp@dns-test-service.dns-4145 jessie_udp@dns-test-service.dns-4145.svc jessie_tcp@dns-test-service.dns-4145.svc jessie_udp@_http._tcp.dns-test-service.dns-4145.svc jessie_tcp@_http._tcp.dns-test-service.dns-4145.svc] Mar 18 21:10:46.976: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:46.980: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:46.984: INFO: Unable to read wheezy_udp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:46.987: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:46.990: INFO: Unable to read wheezy_udp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:46.993: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:46.995: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:46.998: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4145.svc from pod 
dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:47.016: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:47.020: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:47.023: INFO: Unable to read jessie_udp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:47.026: INFO: Unable to read jessie_tcp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:47.029: INFO: Unable to read jessie_udp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:47.031: INFO: Unable to read jessie_tcp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:47.034: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:47.036: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:47.051: INFO: Lookups using dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4145 wheezy_tcp@dns-test-service.dns-4145 wheezy_udp@dns-test-service.dns-4145.svc wheezy_tcp@dns-test-service.dns-4145.svc wheezy_udp@_http._tcp.dns-test-service.dns-4145.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4145.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4145 jessie_tcp@dns-test-service.dns-4145 jessie_udp@dns-test-service.dns-4145.svc jessie_tcp@dns-test-service.dns-4145.svc jessie_udp@_http._tcp.dns-test-service.dns-4145.svc jessie_tcp@_http._tcp.dns-test-service.dns-4145.svc] Mar 18 21:10:51.976: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:51.980: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:51.983: INFO: Unable to read wheezy_udp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the 
server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:51.986: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:51.989: INFO: Unable to read wheezy_udp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:51.992: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:51.996: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:51.999: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:52.022: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:52.024: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:52.027: INFO: Unable to read jessie_udp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:52.029: INFO: Unable to read jessie_tcp@dns-test-service.dns-4145 from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:52.031: INFO: Unable to read jessie_udp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:52.034: INFO: Unable to read jessie_tcp@dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:52.036: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:52.039: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4145.svc from pod dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d: the server could not find the requested resource (get pods dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d) Mar 18 21:10:52.056: INFO: Lookups using dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4145 wheezy_tcp@dns-test-service.dns-4145 wheezy_udp@dns-test-service.dns-4145.svc wheezy_tcp@dns-test-service.dns-4145.svc wheezy_udp@_http._tcp.dns-test-service.dns-4145.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4145.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4145 jessie_tcp@dns-test-service.dns-4145 jessie_udp@dns-test-service.dns-4145.svc jessie_tcp@dns-test-service.dns-4145.svc jessie_udp@_http._tcp.dns-test-service.dns-4145.svc jessie_tcp@_http._tcp.dns-test-service.dns-4145.svc] Mar 18 21:10:57.073: INFO: DNS probes using dns-4145/dns-test-df18dd22-dfc9-49e8-a609-7b6c402fc61d succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:10:57.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4145" for this suite. • [SLOW TEST:37.365 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":11,"skipped":83,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:10:57.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 21:10:57.895: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10c96580-b22a-40ed-8cc4-aab849816d0f" in namespace "projected-9286" to be "success or failure" Mar 18 21:10:57.901: INFO: Pod "downwardapi-volume-10c96580-b22a-40ed-8cc4-aab849816d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.227953ms Mar 18 21:10:59.919: INFO: Pod "downwardapi-volume-10c96580-b22a-40ed-8cc4-aab849816d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023873319s Mar 18 21:11:01.934: INFO: Pod "downwardapi-volume-10c96580-b22a-40ed-8cc4-aab849816d0f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.039319626s STEP: Saw pod success Mar 18 21:11:01.934: INFO: Pod "downwardapi-volume-10c96580-b22a-40ed-8cc4-aab849816d0f" satisfied condition "success or failure" Mar 18 21:11:01.938: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-10c96580-b22a-40ed-8cc4-aab849816d0f container client-container: STEP: delete the pod Mar 18 21:11:01.967: INFO: Waiting for pod downwardapi-volume-10c96580-b22a-40ed-8cc4-aab849816d0f to disappear Mar 18 21:11:02.015: INFO: Pod downwardapi-volume-10c96580-b22a-40ed-8cc4-aab849816d0f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:11:02.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9286" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":85,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:11:02.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Mar 18 21:11:02.193: INFO: Waiting up to 5m0s for pod "var-expansion-9ed04842-7fd2-4bce-8435-9470f869d431" in namespace "var-expansion-7120" to be "success or failure" Mar 18 21:11:02.200: INFO: Pod "var-expansion-9ed04842-7fd2-4bce-8435-9470f869d431": Phase="Pending", Reason="", readiness=false. Elapsed: 6.591139ms Mar 18 21:11:04.204: INFO: Pod "var-expansion-9ed04842-7fd2-4bce-8435-9470f869d431": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010667609s Mar 18 21:11:06.209: INFO: Pod "var-expansion-9ed04842-7fd2-4bce-8435-9470f869d431": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015356068s STEP: Saw pod success Mar 18 21:11:06.209: INFO: Pod "var-expansion-9ed04842-7fd2-4bce-8435-9470f869d431" satisfied condition "success or failure" Mar 18 21:11:06.212: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-9ed04842-7fd2-4bce-8435-9470f869d431 container dapi-container: STEP: delete the pod Mar 18 21:11:06.270: INFO: Waiting for pod var-expansion-9ed04842-7fd2-4bce-8435-9470f869d431 to disappear Mar 18 21:11:06.291: INFO: Pod var-expansion-9ed04842-7fd2-4bce-8435-9470f869d431 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:11:06.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7120" for this suite. 
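
The env-composition pod above exercises Kubernetes' $(VAR) expansion: an env entry may reference any variable defined earlier in the same list. A minimal hand-run sketch of the same behavior (hypothetical names and image, not the spec the framework generated):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo          # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo COMPOSED=$COMPOSED"]
    env:
    - name: FOO
      value: "foo-value"
    - name: COMPOSED                # kubelet expands this to "prefix-foo-value"
      value: "prefix-$(FOO)"
EOF
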
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":91,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:11:06.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 18 21:11:06.385: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:11:13.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7056" for this suite. • [SLOW TEST:6.994 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":14,"skipped":93,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:11:13.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 18 21:11:13.338: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 18 21:11:18.342: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:11:18.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2917" for this suite. 
• [SLOW TEST:5.168 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":15,"skipped":95,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:11:18.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-pr72 STEP: Creating a pod to test atomic-volume-subpath Mar 18 21:11:18.878: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-pr72" in namespace "subpath-4904" to be "success or failure" Mar 18 21:11:18.940: INFO: Pod "pod-subpath-test-secret-pr72": Phase="Pending", Reason="", readiness=false. Elapsed: 62.848221ms Mar 18 21:11:20.945: INFO: Pod "pod-subpath-test-secret-pr72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067171188s Mar 18 21:11:22.949: INFO: Pod "pod-subpath-test-secret-pr72": Phase="Running", Reason="", readiness=true. Elapsed: 4.071502595s Mar 18 21:11:24.954: INFO: Pod "pod-subpath-test-secret-pr72": Phase="Running", Reason="", readiness=true. Elapsed: 6.076138048s Mar 18 21:11:26.958: INFO: Pod "pod-subpath-test-secret-pr72": Phase="Running", Reason="", readiness=true. Elapsed: 8.080525689s Mar 18 21:11:28.963: INFO: Pod "pod-subpath-test-secret-pr72": Phase="Running", Reason="", readiness=true. Elapsed: 10.084893023s Mar 18 21:11:30.966: INFO: Pod "pod-subpath-test-secret-pr72": Phase="Running", Reason="", readiness=true. Elapsed: 12.088792685s Mar 18 21:11:32.970: INFO: Pod "pod-subpath-test-secret-pr72": Phase="Running", Reason="", readiness=true. Elapsed: 14.092553013s Mar 18 21:11:34.975: INFO: Pod "pod-subpath-test-secret-pr72": Phase="Running", Reason="", readiness=true. Elapsed: 16.097110306s Mar 18 21:11:36.979: INFO: Pod "pod-subpath-test-secret-pr72": Phase="Running", Reason="", readiness=true. Elapsed: 18.101023319s Mar 18 21:11:38.983: INFO: Pod "pod-subpath-test-secret-pr72": Phase="Running", Reason="", readiness=true. Elapsed: 20.105309503s Mar 18 21:11:40.987: INFO: Pod "pod-subpath-test-secret-pr72": Phase="Running", Reason="", readiness=true. Elapsed: 22.109027686s Mar 18 21:11:42.991: INFO: Pod "pod-subpath-test-secret-pr72": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.113083392s STEP: Saw pod success Mar 18 21:11:42.991: INFO: Pod "pod-subpath-test-secret-pr72" satisfied condition "success or failure" Mar 18 21:11:42.994: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-pr72 container test-container-subpath-secret-pr72: STEP: delete the pod Mar 18 21:11:43.026: INFO: Waiting for pod pod-subpath-test-secret-pr72 to disappear Mar 18 21:11:43.040: INFO: Pod pod-subpath-test-secret-pr72 no longer exists STEP: Deleting pod pod-subpath-test-secret-pr72 Mar 18 21:11:43.040: INFO: Deleting pod "pod-subpath-test-secret-pr72" in namespace "subpath-4904" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:11:43.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4904" for this suite. • [SLOW TEST:24.591 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":16,"skipped":96,"failed":0} [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:11:43.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:11:43.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3928" for this suite. 
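
The "secure master service" spec that just finished is, in essence, a check that the built-in kubernetes Service in the default namespace fronts the apiserver over https on port 443. The same assertion from the CLI:

# The apiserver's in-cluster service should expose a named https port on 443
kubectl get service kubernetes -n default \
  -o jsonpath='{.spec.ports[?(@.name=="https")].port}'
# expected output: 443
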
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":17,"skipped":96,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:11:43.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 21:11:43.856: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 21:11:45.886: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162703, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162703, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162703, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162703, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 21:11:48.935: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:11:49.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3600" for this suite. STEP: Destroying namespace "webhook-3600-markers" for this suite. 
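
For orientation, registering a mutating pod webhook like the one above amounts to posting a MutatingWebhookConfiguration that points at the in-cluster webhook Service. A heavily abridged sketch (names taken from the log where available, the rest assumed; caBundle omitted for brevity, though a real registration needs the CA that signed the webhook server's cert):

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-defaulter                # hypothetical
webhooks:
- name: pod-defaulter.example.com    # hypothetical
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: webhook-3600        # the suite's webhook namespace
      name: e2e-test-webhook         # the service the log waits on
      path: /mutating-pods           # assumed handler path
EOF
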
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.049 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":18,"skipped":97,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:11:49.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-b65e0a7c-2bff-40d9-be49-b9a4740c407f STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:11:53.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7961" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":148,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:11:53.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-fb62d8d0-5b34-4f06-a41c-431324ea96da STEP: Creating a pod to test consume secrets Mar 18 21:11:53.408: INFO: Waiting up to 5m0s for pod "pod-secrets-51befff0-ec4f-4fdb-9715-b46fd9aef905" in namespace "secrets-5924" to be "success or failure" Mar 18 21:11:53.413: INFO: Pod "pod-secrets-51befff0-ec4f-4fdb-9715-b46fd9aef905": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.734349ms Mar 18 21:11:55.417: INFO: Pod "pod-secrets-51befff0-ec4f-4fdb-9715-b46fd9aef905": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008639488s Mar 18 21:11:57.420: INFO: Pod "pod-secrets-51befff0-ec4f-4fdb-9715-b46fd9aef905": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012484753s STEP: Saw pod success Mar 18 21:11:57.420: INFO: Pod "pod-secrets-51befff0-ec4f-4fdb-9715-b46fd9aef905" satisfied condition "success or failure" Mar 18 21:11:57.423: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-51befff0-ec4f-4fdb-9715-b46fd9aef905 container secret-volume-test: STEP: delete the pod Mar 18 21:11:57.457: INFO: Waiting for pod pod-secrets-51befff0-ec4f-4fdb-9715-b46fd9aef905 to disappear Mar 18 21:11:57.471: INFO: Pod pod-secrets-51befff0-ec4f-4fdb-9715-b46fd9aef905 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:11:57.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5924" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":232,"failed":0} ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:11:57.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2003 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 18 21:11:57.551: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 18 21:12:19.645: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.191:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2003 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:12:19.645: INFO: >>> kubeConfig: /root/.kube/config I0318 21:12:19.686552 6 log.go:172] (0xc002a15ce0) (0xc000e7a960) Create stream I0318 21:12:19.686589 6 log.go:172] (0xc002a15ce0) (0xc000e7a960) Stream added, broadcasting: 1 I0318 21:12:19.688425 6 log.go:172] (0xc002a15ce0) Reply frame received for 1 I0318 21:12:19.688460 6 log.go:172] (0xc002a15ce0) (0xc002a0a780) Create stream I0318 21:12:19.688472 6 log.go:172] (0xc002a15ce0) (0xc002a0a780) Stream added, broadcasting: 3 I0318 21:12:19.689624 6 log.go:172] (0xc002a15ce0) Reply frame received for 3 I0318 21:12:19.689670 6 log.go:172] (0xc002a15ce0) (0xc0028b2780) Create stream I0318 21:12:19.689690 6 log.go:172] (0xc002a15ce0) (0xc0028b2780) Stream added, broadcasting: 5 I0318 21:12:19.690643 6 log.go:172] (0xc002a15ce0) Reply frame received for 5 I0318 
21:12:19.770333 6 log.go:172] (0xc002a15ce0) Data frame received for 3 I0318 21:12:19.770367 6 log.go:172] (0xc002a0a780) (3) Data frame handling I0318 21:12:19.770417 6 log.go:172] (0xc002a0a780) (3) Data frame sent I0318 21:12:19.770432 6 log.go:172] (0xc002a15ce0) Data frame received for 3 I0318 21:12:19.770439 6 log.go:172] (0xc002a0a780) (3) Data frame handling I0318 21:12:19.771341 6 log.go:172] (0xc002a15ce0) Data frame received for 5 I0318 21:12:19.771379 6 log.go:172] (0xc0028b2780) (5) Data frame handling I0318 21:12:19.773816 6 log.go:172] (0xc002a15ce0) Data frame received for 1 I0318 21:12:19.773860 6 log.go:172] (0xc000e7a960) (1) Data frame handling I0318 21:12:19.773885 6 log.go:172] (0xc000e7a960) (1) Data frame sent I0318 21:12:19.773904 6 log.go:172] (0xc002a15ce0) (0xc000e7a960) Stream removed, broadcasting: 1 I0318 21:12:19.773925 6 log.go:172] (0xc002a15ce0) Go away received I0318 21:12:19.774432 6 log.go:172] (0xc002a15ce0) (0xc000e7a960) Stream removed, broadcasting: 1 I0318 21:12:19.774457 6 log.go:172] (0xc002a15ce0) (0xc002a0a780) Stream removed, broadcasting: 3 I0318 21:12:19.774469 6 log.go:172] (0xc002a15ce0) (0xc0028b2780) Stream removed, broadcasting: 5 Mar 18 21:12:19.774: INFO: Found all expected endpoints: [netserver-0] Mar 18 21:12:19.778: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.216:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2003 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:12:19.778: INFO: >>> kubeConfig: /root/.kube/config I0318 21:12:19.809642 6 log.go:172] (0xc0021933f0) (0xc000e7ac80) Create stream I0318 21:12:19.809684 6 log.go:172] (0xc0021933f0) (0xc000e7ac80) Stream added, broadcasting: 1 I0318 21:12:19.812135 6 log.go:172] (0xc0021933f0) Reply frame received for 1 I0318 21:12:19.812159 6 log.go:172] (0xc0021933f0) (0xc0023cae60) Create stream I0318 21:12:19.812171 6 log.go:172] (0xc0021933f0) (0xc0023cae60) Stream added, broadcasting: 3 I0318 21:12:19.813039 6 log.go:172] (0xc0021933f0) Reply frame received for 3 I0318 21:12:19.813077 6 log.go:172] (0xc0021933f0) (0xc002a0a820) Create stream I0318 21:12:19.813091 6 log.go:172] (0xc0021933f0) (0xc002a0a820) Stream added, broadcasting: 5 I0318 21:12:19.814246 6 log.go:172] (0xc0021933f0) Reply frame received for 5 I0318 21:12:19.886369 6 log.go:172] (0xc0021933f0) Data frame received for 5 I0318 21:12:19.886411 6 log.go:172] (0xc002a0a820) (5) Data frame handling I0318 21:12:19.886446 6 log.go:172] (0xc0021933f0) Data frame received for 3 I0318 21:12:19.886544 6 log.go:172] (0xc0023cae60) (3) Data frame handling I0318 21:12:19.886639 6 log.go:172] (0xc0023cae60) (3) Data frame sent I0318 21:12:19.886660 6 log.go:172] (0xc0021933f0) Data frame received for 3 I0318 21:12:19.886670 6 log.go:172] (0xc0023cae60) (3) Data frame handling I0318 21:12:19.887534 6 log.go:172] (0xc0021933f0) Data frame received for 1 I0318 21:12:19.887557 6 log.go:172] (0xc000e7ac80) (1) Data frame handling I0318 21:12:19.887577 6 log.go:172] (0xc000e7ac80) (1) Data frame sent I0318 21:12:19.887598 6 log.go:172] (0xc0021933f0) (0xc000e7ac80) Stream removed, broadcasting: 1 I0318 21:12:19.887613 6 log.go:172] (0xc0021933f0) Go away received I0318 21:12:19.887771 6 log.go:172] (0xc0021933f0) (0xc000e7ac80) Stream removed, broadcasting: 1 I0318 21:12:19.887799 6 log.go:172] (0xc0021933f0) (0xc0023cae60) Stream removed, broadcasting: 3 I0318 21:12:19.887816 
6 log.go:172] (0xc0021933f0) (0xc002a0a820) Stream removed, broadcasting: 5 Mar 18 21:12:19.887: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:12:19.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2003" for this suite. • [SLOW TEST:22.417 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":232,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:12:19.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5149 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-5149 STEP: Creating statefulset with conflicting port in namespace statefulset-5149 STEP: Waiting until pod test-pod will start running in namespace statefulset-5149 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5149 Mar 18 21:12:26.054: INFO: Observed stateful pod in namespace: statefulset-5149, name: ss-0, uid: b947b795-ac6d-44ee-8f3d-217f35771c06, status phase: Pending. Waiting for statefulset controller to delete. Mar 18 21:12:26.488: INFO: Observed stateful pod in namespace: statefulset-5149, name: ss-0, uid: b947b795-ac6d-44ee-8f3d-217f35771c06, status phase: Failed. Waiting for statefulset controller to delete. Mar 18 21:12:26.517: INFO: Observed stateful pod in namespace: statefulset-5149, name: ss-0, uid: b947b795-ac6d-44ee-8f3d-217f35771c06, status phase: Failed. Waiting for statefulset controller to delete. 
Mar 18 21:12:26.596: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5149 STEP: Removing pod with conflicting port in namespace statefulset-5149 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5149 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 18 21:12:30.667: INFO: Deleting all statefulset in ns statefulset-5149 Mar 18 21:12:30.671: INFO: Scaling statefulset ss to 0 Mar 18 21:12:40.687: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 21:12:40.690: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:12:40.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5149" for this suite. • [SLOW TEST:20.910 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":22,"skipped":251,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:12:40.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-55806e69-b917-49c0-a291-312bd821bb21 STEP: Creating a pod to test consume secrets Mar 18 21:12:40.881: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-33c4c844-ebeb-4d95-bf89-fbde869adf6e" in namespace "projected-232" to be "success or failure" Mar 18 21:12:40.891: INFO: Pod "pod-projected-secrets-33c4c844-ebeb-4d95-bf89-fbde869adf6e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.210954ms Mar 18 21:12:42.895: INFO: Pod "pod-projected-secrets-33c4c844-ebeb-4d95-bf89-fbde869adf6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013909371s Mar 18 21:12:44.899: INFO: Pod "pod-projected-secrets-33c4c844-ebeb-4d95-bf89-fbde869adf6e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017959686s STEP: Saw pod success Mar 18 21:12:44.899: INFO: Pod "pod-projected-secrets-33c4c844-ebeb-4d95-bf89-fbde869adf6e" satisfied condition "success or failure" Mar 18 21:12:44.902: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-33c4c844-ebeb-4d95-bf89-fbde869adf6e container projected-secret-volume-test: STEP: delete the pod Mar 18 21:12:44.917: INFO: Waiting for pod pod-projected-secrets-33c4c844-ebeb-4d95-bf89-fbde869adf6e to disappear Mar 18 21:12:44.921: INFO: Pod pod-projected-secrets-33c4c844-ebeb-4d95-bf89-fbde869adf6e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:12:44.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-232" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":262,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:12:44.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1382 STEP: creating the pod Mar 18 21:12:45.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2717' Mar 18 21:12:45.307: INFO: stderr: "" Mar 18 21:12:45.307: INFO: stdout: "pod/pause created\n" Mar 18 21:12:45.307: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 18 21:12:45.307: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2717" to be "running and ready" Mar 18 21:12:45.311: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152896ms Mar 18 21:12:47.316: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008944489s Mar 18 21:12:49.320: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.013293298s Mar 18 21:12:49.320: INFO: Pod "pause" satisfied condition "running and ready" Mar 18 21:12:49.320: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Mar 18 21:12:49.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2717' Mar 18 21:12:49.416: INFO: stderr: "" Mar 18 21:12:49.416: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 18 21:12:49.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2717' Mar 18 21:12:49.519: INFO: stderr: "" Mar 18 21:12:49.519: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 18 21:12:49.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2717' Mar 18 21:12:49.606: INFO: stderr: "" Mar 18 21:12:49.606: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 18 21:12:49.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2717' Mar 18 21:12:49.693: INFO: stderr: "" Mar 18 21:12:49.693: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 STEP: using delete to clean up resources Mar 18 21:12:49.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2717' Mar 18 21:12:49.807: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 21:12:49.807: INFO: stdout: "pod \"pause\" force deleted\n" Mar 18 21:12:49.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2717' Mar 18 21:12:49.908: INFO: stderr: "No resources found in kubectl-2717 namespace.\n" Mar 18 21:12:49.908: INFO: stdout: "" Mar 18 21:12:49.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2717 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 18 21:12:50.115: INFO: stderr: "" Mar 18 21:12:50.115: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:12:50.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2717" for this suite. 
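
The three kubectl invocations above cover the whole label lifecycle; stripped of the test plumbing, the syntax is:

kubectl label pod pause testing-label=testing-label-value   # add or update
kubectl get pod pause -L testing-label                      # show the label as a column
kubectl label pod pause testing-label-                      # trailing dash removes it
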
• [SLOW TEST:5.277 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1379 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":24,"skipped":293,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:12:50.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 21:12:50.332: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d22c7d6c-e26f-4cb6-97fa-c642acea04d3" in namespace "projected-8744" to be "success or failure" Mar 18 21:12:50.348: INFO: Pod "downwardapi-volume-d22c7d6c-e26f-4cb6-97fa-c642acea04d3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.132374ms Mar 18 21:12:52.353: INFO: Pod "downwardapi-volume-d22c7d6c-e26f-4cb6-97fa-c642acea04d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020253842s Mar 18 21:12:54.357: INFO: Pod "downwardapi-volume-d22c7d6c-e26f-4cb6-97fa-c642acea04d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02485251s STEP: Saw pod success Mar 18 21:12:54.357: INFO: Pod "downwardapi-volume-d22c7d6c-e26f-4cb6-97fa-c642acea04d3" satisfied condition "success or failure" Mar 18 21:12:54.360: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d22c7d6c-e26f-4cb6-97fa-c642acea04d3 container client-container: STEP: delete the pod Mar 18 21:12:54.392: INFO: Waiting for pod downwardapi-volume-d22c7d6c-e26f-4cb6-97fa-c642acea04d3 to disappear Mar 18 21:12:54.414: INFO: Pod downwardapi-volume-d22c7d6c-e26f-4cb6-97fa-c642acea04d3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:12:54.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8744" for this suite. 
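
The DefaultMode and item-mode downwardAPI specs above both come down to the mode fields on a projected downwardAPI volume. A minimal sketch of the per-item case, with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo        # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400               # the per-item file mode under test
            fieldRef:
              fieldPath: metadata.name
EOF
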
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:12:54.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 18 21:12:59.040: INFO: Successfully updated pod "labelsupdate9299308d-2af1-42b4-afaa-eba69548b622" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:13:01.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6827" for this suite. • [SLOW TEST:6.662 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":329,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:13:01.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3132 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 18 21:13:01.164: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 18 21:13:23.289: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.223:8080/dial?request=hostname&protocol=http&host=10.244.1.194&port=8080&tries=1'] Namespace:pod-network-test-3132 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Mar 18 21:13:23.289: INFO: >>> kubeConfig: /root/.kube/config I0318 21:13:23.329304 6 log.go:172] (0xc0026536b0) (0xc0010f3ae0) Create stream I0318 21:13:23.329341 6 log.go:172] (0xc0026536b0) (0xc0010f3ae0) Stream added, broadcasting: 1 I0318 21:13:23.331037 6 log.go:172] (0xc0026536b0) Reply frame received for 1 I0318 21:13:23.331079 6 log.go:172] (0xc0026536b0) (0xc0023cbe00) Create stream I0318 21:13:23.331094 6 log.go:172] (0xc0026536b0) (0xc0023cbe00) Stream added, broadcasting: 3 I0318 21:13:23.332148 6 log.go:172] (0xc0026536b0) Reply frame received for 3 I0318 21:13:23.332176 6 log.go:172] (0xc0026536b0) (0xc0010f3cc0) Create stream I0318 21:13:23.332190 6 log.go:172] (0xc0026536b0) (0xc0010f3cc0) Stream added, broadcasting: 5 I0318 21:13:23.333165 6 log.go:172] (0xc0026536b0) Reply frame received for 5 I0318 21:13:23.413641 6 log.go:172] (0xc0026536b0) Data frame received for 3 I0318 21:13:23.413684 6 log.go:172] (0xc0023cbe00) (3) Data frame handling I0318 21:13:23.413716 6 log.go:172] (0xc0023cbe00) (3) Data frame sent I0318 21:13:23.413831 6 log.go:172] (0xc0026536b0) Data frame received for 5 I0318 21:13:23.413847 6 log.go:172] (0xc0010f3cc0) (5) Data frame handling I0318 21:13:23.413926 6 log.go:172] (0xc0026536b0) Data frame received for 3 I0318 21:13:23.413936 6 log.go:172] (0xc0023cbe00) (3) Data frame handling I0318 21:13:23.415478 6 log.go:172] (0xc0026536b0) Data frame received for 1 I0318 21:13:23.415499 6 log.go:172] (0xc0010f3ae0) (1) Data frame handling I0318 21:13:23.415517 6 log.go:172] (0xc0010f3ae0) (1) Data frame sent I0318 21:13:23.415537 6 log.go:172] (0xc0026536b0) (0xc0010f3ae0) Stream removed, broadcasting: 1 I0318 21:13:23.415566 6 log.go:172] (0xc0026536b0) Go away received I0318 21:13:23.415672 6 log.go:172] (0xc0026536b0) (0xc0010f3ae0) Stream removed, broadcasting: 1 I0318 21:13:23.415689 6 log.go:172] (0xc0026536b0) (0xc0023cbe00) Stream removed, broadcasting: 3 I0318 21:13:23.415700 6 log.go:172] (0xc0026536b0) (0xc0010f3cc0) Stream removed, broadcasting: 5 Mar 18 21:13:23.415: INFO: Waiting for responses: map[] Mar 18 21:13:23.419: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.223:8080/dial?request=hostname&protocol=http&host=10.244.2.222&port=8080&tries=1'] Namespace:pod-network-test-3132 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:13:23.419: INFO: >>> kubeConfig: /root/.kube/config I0318 21:13:23.446774 6 log.go:172] (0xc000bbc580) (0xc001424500) Create stream I0318 21:13:23.446802 6 log.go:172] (0xc000bbc580) (0xc001424500) Stream added, broadcasting: 1 I0318 21:13:23.448673 6 log.go:172] (0xc000bbc580) Reply frame received for 1 I0318 21:13:23.448707 6 log.go:172] (0xc000bbc580) (0xc001313a40) Create stream I0318 21:13:23.448718 6 log.go:172] (0xc000bbc580) (0xc001313a40) Stream added, broadcasting: 3 I0318 21:13:23.449959 6 log.go:172] (0xc000bbc580) Reply frame received for 3 I0318 21:13:23.450011 6 log.go:172] (0xc000bbc580) (0xc0010f3d60) Create stream I0318 21:13:23.450025 6 log.go:172] (0xc000bbc580) (0xc0010f3d60) Stream added, broadcasting: 5 I0318 21:13:23.450895 6 log.go:172] (0xc000bbc580) Reply frame received for 5 I0318 21:13:23.527167 6 log.go:172] (0xc000bbc580) Data frame received for 3 I0318 21:13:23.527211 6 log.go:172] (0xc001313a40) (3) Data frame handling I0318 21:13:23.527242 6 log.go:172] (0xc001313a40) (3) Data frame sent I0318 21:13:23.527459 6 log.go:172] (0xc000bbc580) Data 
frame received for 5 I0318 21:13:23.527491 6 log.go:172] (0xc0010f3d60) (5) Data frame handling I0318 21:13:23.527510 6 log.go:172] (0xc000bbc580) Data frame received for 3 I0318 21:13:23.527517 6 log.go:172] (0xc001313a40) (3) Data frame handling I0318 21:13:23.528935 6 log.go:172] (0xc000bbc580) Data frame received for 1 I0318 21:13:23.528951 6 log.go:172] (0xc001424500) (1) Data frame handling I0318 21:13:23.528961 6 log.go:172] (0xc001424500) (1) Data frame sent I0318 21:13:23.528972 6 log.go:172] (0xc000bbc580) (0xc001424500) Stream removed, broadcasting: 1 I0318 21:13:23.529051 6 log.go:172] (0xc000bbc580) Go away received I0318 21:13:23.529348 6 log.go:172] (0xc000bbc580) (0xc001424500) Stream removed, broadcasting: 1 I0318 21:13:23.529384 6 log.go:172] (0xc000bbc580) (0xc001313a40) Stream removed, broadcasting: 3 I0318 21:13:23.529410 6 log.go:172] (0xc000bbc580) (0xc0010f3d60) Stream removed, broadcasting: 5 Mar 18 21:13:23.529: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:13:23.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3132" for this suite. • [SLOW TEST:22.454 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":373,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:13:23.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 18 21:13:23.687: INFO: Waiting up to 5m0s for pod "pod-870ab9d4-9298-401e-bcf0-680dfb85f963" in namespace "emptydir-7333" to be "success or failure" Mar 18 21:13:23.706: INFO: Pod "pod-870ab9d4-9298-401e-bcf0-680dfb85f963": Phase="Pending", Reason="", readiness=false. Elapsed: 18.810722ms Mar 18 21:13:25.710: INFO: Pod "pod-870ab9d4-9298-401e-bcf0-680dfb85f963": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023160669s Mar 18 21:13:27.714: INFO: Pod "pod-870ab9d4-9298-401e-bcf0-680dfb85f963": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027621971s STEP: Saw pod success Mar 18 21:13:27.714: INFO: Pod "pod-870ab9d4-9298-401e-bcf0-680dfb85f963" satisfied condition "success or failure" Mar 18 21:13:27.718: INFO: Trying to get logs from node jerma-worker pod pod-870ab9d4-9298-401e-bcf0-680dfb85f963 container test-container: STEP: delete the pod Mar 18 21:13:27.749: INFO: Waiting for pod pod-870ab9d4-9298-401e-bcf0-680dfb85f963 to disappear Mar 18 21:13:27.780: INFO: Pod pod-870ab9d4-9298-401e-bcf0-680dfb85f963 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:13:27.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7333" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":384,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:13:27.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 18 21:13:27.843: INFO: Waiting up to 5m0s for pod "downward-api-b537274c-987d-4cc6-ab4c-8411abd28931" in namespace "downward-api-4286" to be "success or failure" Mar 18 21:13:27.852: INFO: Pod "downward-api-b537274c-987d-4cc6-ab4c-8411abd28931": Phase="Pending", Reason="", readiness=false. Elapsed: 9.414082ms Mar 18 21:13:29.869: INFO: Pod "downward-api-b537274c-987d-4cc6-ab4c-8411abd28931": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026375526s Mar 18 21:13:31.873: INFO: Pod "downward-api-b537274c-987d-4cc6-ab4c-8411abd28931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03010788s STEP: Saw pod success Mar 18 21:13:31.873: INFO: Pod "downward-api-b537274c-987d-4cc6-ab4c-8411abd28931" satisfied condition "success or failure" Mar 18 21:13:31.875: INFO: Trying to get logs from node jerma-worker2 pod downward-api-b537274c-987d-4cc6-ab4c-8411abd28931 container dapi-container: STEP: delete the pod Mar 18 21:13:31.937: INFO: Waiting for pod downward-api-b537274c-987d-4cc6-ab4c-8411abd28931 to disappear Mar 18 21:13:31.946: INFO: Pod downward-api-b537274c-987d-4cc6-ab4c-8411abd28931 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:13:31.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4286" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":391,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:13:31.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9527 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-9527 Mar 18 21:13:32.024: INFO: Found 0 stateful pods, waiting for 1 Mar 18 21:13:42.028: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 18 21:13:42.049: INFO: Deleting all statefulset in ns statefulset-9527 Mar 18 21:13:42.054: INFO: Scaling statefulset ss to 0 Mar 18 21:14:02.124: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 21:14:02.127: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:14:02.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9527" for this suite. 
• [SLOW TEST:30.197 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":30,"skipped":431,"failed":0} S ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:14:02.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-1348 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1348 to expose endpoints map[] Mar 18 21:14:02.264: INFO: Get endpoints failed (3.373862ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 18 21:14:03.267: INFO: successfully validated that service multi-endpoint-test in namespace services-1348 exposes endpoints map[] (1.00718721s elapsed) STEP: Creating pod pod1 in namespace services-1348 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1348 to expose endpoints map[pod1:[100]] Mar 18 21:14:06.323: INFO: successfully validated that service multi-endpoint-test in namespace services-1348 exposes endpoints map[pod1:[100]] (3.048737266s elapsed) STEP: Creating pod pod2 in namespace services-1348 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1348 to expose endpoints map[pod1:[100] pod2:[101]] Mar 18 21:14:09.496: INFO: successfully validated that service multi-endpoint-test in namespace services-1348 exposes endpoints map[pod1:[100] pod2:[101]] (3.168956109s elapsed) STEP: Deleting pod pod1 in namespace services-1348 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1348 to expose endpoints map[pod2:[101]] Mar 18 21:14:10.538: INFO: successfully validated that service multi-endpoint-test in namespace services-1348 exposes endpoints map[pod2:[101]] (1.037272642s elapsed) STEP: Deleting pod pod2 in namespace services-1348 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1348 to expose endpoints map[] Mar 18 21:14:11.570: INFO: successfully validated that service multi-endpoint-test in namespace services-1348 exposes endpoints map[] (1.027814457s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:14:11.614: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "services-1348" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.472 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":31,"skipped":432,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:14:11.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 18 21:14:11.802: INFO: Waiting up to 5m0s for pod "pod-bae3a4be-a4d6-47f8-867a-46f35c1adc1c" in namespace "emptydir-3625" to be "success or failure" Mar 18 21:14:11.817: INFO: Pod "pod-bae3a4be-a4d6-47f8-867a-46f35c1adc1c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.149792ms Mar 18 21:14:13.822: INFO: Pod "pod-bae3a4be-a4d6-47f8-867a-46f35c1adc1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019575591s Mar 18 21:14:15.826: INFO: Pod "pod-bae3a4be-a4d6-47f8-867a-46f35c1adc1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023776998s STEP: Saw pod success Mar 18 21:14:15.826: INFO: Pod "pod-bae3a4be-a4d6-47f8-867a-46f35c1adc1c" satisfied condition "success or failure" Mar 18 21:14:15.829: INFO: Trying to get logs from node jerma-worker2 pod pod-bae3a4be-a4d6-47f8-867a-46f35c1adc1c container test-container: STEP: delete the pod Mar 18 21:14:15.863: INFO: Waiting for pod pod-bae3a4be-a4d6-47f8-867a-46f35c1adc1c to disappear Mar 18 21:14:15.878: INFO: Pod pod-bae3a4be-a4d6-47f8-867a-46f35c1adc1c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:14:15.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3625" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":439,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:14:15.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 18 21:14:15.962: INFO: PodSpec: initContainers in spec.initContainers Mar 18 21:15:01.793: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0cfd3ebd-5dc3-4cc7-8de4-4fba7362dc92", GenerateName:"", Namespace:"init-container-7154", SelfLink:"/api/v1/namespaces/init-container-7154/pods/pod-init-0cfd3ebd-5dc3-4cc7-8de4-4fba7362dc92", UID:"ab8670d5-a0f3-4728-95a8-4f946a43dabe", ResourceVersion:"847124", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720162855, loc:(*time.Location)(0x7d83a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"962527682"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4mvv6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002691280), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", 
Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4mvv6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4mvv6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4mvv6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001e30a48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0026c9620), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", 
Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e30b90)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e30bb0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001e30bb8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001e30bbc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162856, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162856, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162856, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162855, loc:(*time.Location)(0x7d83a80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.228", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.228"}}, StartTime:(*v1.Time)(0xc0014923c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001492400), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f0bb20)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://c2952398d6a556a76b3d002371d9ad364f90e1b93f9447894132988bcdc01678", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001492420), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0014923e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc001e30c3f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:15:01.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7154" for this suite. • [SLOW TEST:46.030 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":33,"skipped":440,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:15:01.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:15:02.055: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 18 21:15:07.059: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 18 21:15:07.059: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 18 21:15:09.063: INFO: Creating deployment "test-rollover-deployment" Mar 18 21:15:09.074: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 18 21:15:11.081: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 18 21:15:11.088: INFO: Ensure that both replica sets have 1 created replica Mar 18 21:15:11.094: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 18 21:15:11.100: INFO: Updating deployment test-rollover-deployment Mar 18 21:15:11.100: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 18 21:15:13.177: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 18 21:15:13.184: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 18 21:15:13.188: INFO: all replica sets need to contain the pod-template-hash label Mar 18 21:15:13.189: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162911, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 21:15:15.196: INFO: all replica sets need to contain the pod-template-hash label Mar 18 21:15:15.196: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162914, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 21:15:17.194: INFO: all replica sets need to contain the pod-template-hash label Mar 18 21:15:17.195: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162914, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 21:15:19.196: INFO: all replica sets need to contain the pod-template-hash label Mar 18 21:15:19.196: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162914, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 21:15:21.195: INFO: all replica sets need to contain the pod-template-hash label Mar 18 21:15:21.195: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162914, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 21:15:23.196: INFO: all replica sets need to contain the pod-template-hash label Mar 18 21:15:23.196: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162914, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720162909, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 21:15:25.197: INFO: Mar 18 21:15:25.197: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 18 21:15:25.202: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-3575 /apis/apps/v1/namespaces/deployment-3575/deployments/test-rollover-deployment f8a534ad-ad73-489a-883a-812f2d418e75 847285 2 2020-03-18 21:15:09 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] 
map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00052c5f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-18 21:15:09 +0000 UTC,LastTransitionTime:2020-03-18 21:15:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-03-18 21:15:24 +0000 UTC,LastTransitionTime:2020-03-18 21:15:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 18 21:15:25.204: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-3575 /apis/apps/v1/namespaces/deployment-3575/replicasets/test-rollover-deployment-574d6dfbff fb0a54d2-b7d8-4905-b096-d6b930230ddf 847272 2 2020-03-18 21:15:11 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment f8a534ad-ad73-489a-883a-812f2d418e75 0xc000dcd4e7 0xc000dcd4e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000dcd678 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 18 21:15:25.204: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 18 21:15:25.204: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3575 /apis/apps/v1/namespaces/deployment-3575/replicasets/test-rollover-controller bf59696c-bcff-4d17-bb5f-5d3d83c00489 847283 2 2020-03-18 21:15:02 +0000 UTC 
map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment f8a534ad-ad73-489a-883a-812f2d418e75 0xc000dcd417 0xc000dcd418}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000dcd478 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 18 21:15:25.204: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-3575 /apis/apps/v1/namespaces/deployment-3575/replicasets/test-rollover-deployment-f6c94f66c 06d9222a-9c1e-4a0b-82d5-d6372764a339 847223 2 2020-03-18 21:15:09 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment f8a534ad-ad73-489a-883a-812f2d418e75 0xc000dcd770 0xc000dcd771}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000dcd828 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 18 21:15:25.255: INFO: Pod "test-rollover-deployment-574d6dfbff-44l57" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-44l57 test-rollover-deployment-574d6dfbff- deployment-3575 /api/v1/namespaces/deployment-3575/pods/test-rollover-deployment-574d6dfbff-44l57 8ee8c6d9-1dc7-44d4-bc9d-c36d8b87f4bb 847242 0 2020-03-18 21:15:11 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff fb0a54d2-b7d8-4905-b096-d6b930230ddf 0xc000ab26c7 0xc000ab26c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nj56j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nj56j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nj56j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:15:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:15:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:15:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:15:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.199,StartTime:2020-03-18 21:15:11 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-18 21:15:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://9ff691a1aae08f0d58b92303f8f7886154110c6523557184263893266fcfe713,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.199,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:15:25.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3575" for this suite. • [SLOW TEST:23.345 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":34,"skipped":473,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:15:25.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 18 21:15:25.316: INFO: Waiting up to 5m0s for pod "pod-e80f69b2-ed84-4883-87e7-7517ef474a23" in namespace "emptydir-5355" to be "success or failure" Mar 18 21:15:25.320: INFO: Pod "pod-e80f69b2-ed84-4883-87e7-7517ef474a23": Phase="Pending", Reason="", readiness=false. Elapsed: 3.694453ms Mar 18 21:15:27.324: INFO: Pod "pod-e80f69b2-ed84-4883-87e7-7517ef474a23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00770634s Mar 18 21:15:29.345: INFO: Pod "pod-e80f69b2-ed84-4883-87e7-7517ef474a23": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028308912s STEP: Saw pod success Mar 18 21:15:29.345: INFO: Pod "pod-e80f69b2-ed84-4883-87e7-7517ef474a23" satisfied condition "success or failure" Mar 18 21:15:29.347: INFO: Trying to get logs from node jerma-worker pod pod-e80f69b2-ed84-4883-87e7-7517ef474a23 container test-container: STEP: delete the pod Mar 18 21:15:29.381: INFO: Waiting for pod pod-e80f69b2-ed84-4883-87e7-7517ef474a23 to disappear Mar 18 21:15:29.386: INFO: Pod pod-e80f69b2-ed84-4883-87e7-7517ef474a23 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:15:29.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5355" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":497,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:15:29.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:15:29.441: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:15:30.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5306" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":36,"skipped":505,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:15:30.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7383 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7383 STEP: creating replication controller externalsvc in namespace services-7383 I0318 21:15:30.252452 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7383, replica count: 2 I0318 21:15:33.302886 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 21:15:36.303204 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 18 21:15:36.386: INFO: Creating new exec pod Mar 18 21:15:40.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7383 execpod9w9bp -- /bin/sh -x -c nslookup nodeport-service' Mar 18 21:15:40.668: INFO: stderr: "I0318 21:15:40.555915 358 log.go:172] (0xc0001054a0) (0xc000661cc0) Create stream\nI0318 21:15:40.555989 358 log.go:172] (0xc0001054a0) (0xc000661cc0) Stream added, broadcasting: 1\nI0318 21:15:40.560482 358 log.go:172] (0xc0001054a0) Reply frame received for 1\nI0318 21:15:40.560642 358 log.go:172] (0xc0001054a0) (0xc000ad4000) Create stream\nI0318 21:15:40.560722 358 log.go:172] (0xc0001054a0) (0xc000ad4000) Stream added, broadcasting: 3\nI0318 21:15:40.562405 358 log.go:172] (0xc0001054a0) Reply frame received for 3\nI0318 21:15:40.562454 358 log.go:172] (0xc0001054a0) (0xc000382000) Create stream\nI0318 21:15:40.562479 358 log.go:172] (0xc0001054a0) (0xc000382000) Stream added, broadcasting: 5\nI0318 21:15:40.563642 358 log.go:172] (0xc0001054a0) Reply frame received for 5\nI0318 21:15:40.652093 358 log.go:172] (0xc0001054a0) Data frame received for 5\nI0318 21:15:40.652122 358 log.go:172] (0xc000382000) (5) Data frame handling\nI0318 21:15:40.652150 358 log.go:172] (0xc000382000) (5) Data frame sent\n+ nslookup nodeport-service\nI0318 21:15:40.660061 358 log.go:172] (0xc0001054a0) Data frame received for 3\nI0318 21:15:40.660090 358 log.go:172] (0xc000ad4000) (3) Data frame handling\nI0318 21:15:40.660136 358 
log.go:172] (0xc000ad4000) (3) Data frame sent\nI0318 21:15:40.661033 358 log.go:172] (0xc0001054a0) Data frame received for 3\nI0318 21:15:40.661051 358 log.go:172] (0xc000ad4000) (3) Data frame handling\nI0318 21:15:40.661065 358 log.go:172] (0xc000ad4000) (3) Data frame sent\nI0318 21:15:40.661913 358 log.go:172] (0xc0001054a0) Data frame received for 3\nI0318 21:15:40.661937 358 log.go:172] (0xc0001054a0) Data frame received for 5\nI0318 21:15:40.661955 358 log.go:172] (0xc000382000) (5) Data frame handling\nI0318 21:15:40.661980 358 log.go:172] (0xc000ad4000) (3) Data frame handling\nI0318 21:15:40.663614 358 log.go:172] (0xc0001054a0) Data frame received for 1\nI0318 21:15:40.663629 358 log.go:172] (0xc000661cc0) (1) Data frame handling\nI0318 21:15:40.663639 358 log.go:172] (0xc000661cc0) (1) Data frame sent\nI0318 21:15:40.663695 358 log.go:172] (0xc0001054a0) (0xc000661cc0) Stream removed, broadcasting: 1\nI0318 21:15:40.663784 358 log.go:172] (0xc0001054a0) Go away received\nI0318 21:15:40.664042 358 log.go:172] (0xc0001054a0) (0xc000661cc0) Stream removed, broadcasting: 1\nI0318 21:15:40.664065 358 log.go:172] (0xc0001054a0) (0xc000ad4000) Stream removed, broadcasting: 3\nI0318 21:15:40.664083 358 log.go:172] (0xc0001054a0) (0xc000382000) Stream removed, broadcasting: 5\n" Mar 18 21:15:40.668: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7383.svc.cluster.local\tcanonical name = externalsvc.services-7383.svc.cluster.local.\nName:\texternalsvc.services-7383.svc.cluster.local\nAddress: 10.107.166.177\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7383, will wait for the garbage collector to delete the pods Mar 18 21:15:40.728: INFO: Deleting ReplicationController externalsvc took: 7.067414ms Mar 18 21:15:40.829: INFO: Terminating ReplicationController externalsvc pods took: 100.240407ms Mar 18 21:15:49.573: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:15:49.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7383" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:19.585 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":37,"skipped":518,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:15:49.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:15:53.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2505" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":526,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:15:53.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-clcb8 in namespace proxy-6792 I0318 21:15:53.855941 6 runners.go:189] Created replication controller with name: proxy-service-clcb8, namespace: proxy-6792, replica count: 1 I0318 21:15:54.906342 6 runners.go:189] proxy-service-clcb8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 21:15:55.906572 6 runners.go:189] proxy-service-clcb8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 21:15:56.906800 6 runners.go:189] proxy-service-clcb8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0318 21:15:57.906989 6 runners.go:189] proxy-service-clcb8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0318 21:15:58.907167 6 runners.go:189] proxy-service-clcb8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0318 21:15:59.907434 6 runners.go:189] proxy-service-clcb8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0318 21:16:00.907676 6 runners.go:189] proxy-service-clcb8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0318 21:16:01.907882 6 runners.go:189] proxy-service-clcb8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 18 21:16:01.910: INFO: setup took 8.147506842s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 18 21:16:01.915: INFO: (0) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 4.46475ms) Mar 18 21:16:01.915: INFO: (0) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 4.546232ms) Mar 18 21:16:01.916: INFO: (0) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 4.916451ms) Mar 18 21:16:01.916: INFO: (0) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 5.250377ms) Mar 18 21:16:01.916: INFO: (0) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname2/proxy/: bar (200; 5.661059ms) Mar 18 21:16:01.916: INFO: (0) 
/api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 5.834204ms) Mar 18 21:16:01.919: INFO: (0) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:1080/proxy/: ... (200; 8.556698ms) Mar 18 21:16:01.921: INFO: (0) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 10.380386ms) Mar 18 21:16:01.921: INFO: (0) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:1080/proxy/: test<... (200; 10.532005ms) Mar 18 21:16:01.921: INFO: (0) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname1/proxy/: foo (200; 10.484327ms) Mar 18 21:16:01.921: INFO: (0) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 10.461691ms) Mar 18 21:16:01.924: INFO: (0) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 12.885962ms) Mar 18 21:16:01.924: INFO: (0) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname1/proxy/: tls baz (200; 12.874954ms) Mar 18 21:16:01.925: INFO: (0) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 14.804675ms) Mar 18 21:16:01.925: INFO: (0) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 14.719522ms) Mar 18 21:16:01.928: INFO: (0) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: test<... (200; 27.942749ms) Mar 18 21:16:01.956: INFO: (1) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 28.110054ms) Mar 18 21:16:01.957: INFO: (1) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 28.187861ms) Mar 18 21:16:01.957: INFO: (1) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 28.281658ms) Mar 18 21:16:01.957: INFO: (1) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 28.910673ms) Mar 18 21:16:01.957: INFO: (1) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 28.945782ms) Mar 18 21:16:01.957: INFO: (1) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 29.090274ms) Mar 18 21:16:01.958: INFO: (1) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname2/proxy/: bar (200; 29.498613ms) Mar 18 21:16:01.958: INFO: (1) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: ... 
(200; 29.702096ms) Mar 18 21:16:01.958: INFO: (1) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 29.686544ms) Mar 18 21:16:01.958: INFO: (1) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 29.712629ms) Mar 18 21:16:01.958: INFO: (1) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname1/proxy/: tls baz (200; 29.644122ms) Mar 18 21:16:01.959: INFO: (1) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 30.814067ms) Mar 18 21:16:01.959: INFO: (1) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 30.808811ms) Mar 18 21:16:01.962: INFO: (2) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 2.949601ms) Mar 18 21:16:01.962: INFO: (2) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 3.097936ms) Mar 18 21:16:01.963: INFO: (2) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 3.291666ms) Mar 18 21:16:01.965: INFO: (2) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 6.046852ms) Mar 18 21:16:01.966: INFO: (2) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname2/proxy/: bar (200; 6.185104ms) Mar 18 21:16:01.966: INFO: (2) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 6.292984ms) Mar 18 21:16:01.966: INFO: (2) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname1/proxy/: foo (200; 6.305171ms) Mar 18 21:16:01.966: INFO: (2) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname1/proxy/: tls baz (200; 6.402418ms) Mar 18 21:16:01.966: INFO: (2) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: test<... (200; 6.721555ms) Mar 18 21:16:01.966: INFO: (2) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 6.819356ms) Mar 18 21:16:01.966: INFO: (2) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 6.906707ms) Mar 18 21:16:01.966: INFO: (2) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 6.843721ms) Mar 18 21:16:01.966: INFO: (2) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:1080/proxy/: ... (200; 6.991413ms) Mar 18 21:16:01.966: INFO: (2) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 6.975288ms) Mar 18 21:16:01.972: INFO: (3) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 5.193275ms) Mar 18 21:16:01.972: INFO: (3) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:1080/proxy/: test<... (200; 5.36278ms) Mar 18 21:16:01.972: INFO: (3) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 5.7136ms) Mar 18 21:16:01.972: INFO: (3) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 5.420881ms) Mar 18 21:16:01.972: INFO: (3) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 5.656196ms) Mar 18 21:16:01.972: INFO: (3) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 5.758988ms) Mar 18 21:16:01.972: INFO: (3) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: ... 
(200; 5.671867ms) Mar 18 21:16:01.973: INFO: (3) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 6.392818ms) Mar 18 21:16:01.973: INFO: (3) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname1/proxy/: foo (200; 6.27744ms) Mar 18 21:16:01.973: INFO: (3) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 6.490066ms) Mar 18 21:16:01.973: INFO: (3) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname1/proxy/: tls baz (200; 6.549843ms) Mar 18 21:16:01.973: INFO: (3) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname2/proxy/: bar (200; 6.904049ms) Mar 18 21:16:01.973: INFO: (3) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 6.745711ms) Mar 18 21:16:01.973: INFO: (3) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 6.743747ms) Mar 18 21:16:01.974: INFO: (3) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 6.89519ms) Mar 18 21:16:01.978: INFO: (4) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 3.898111ms) Mar 18 21:16:01.978: INFO: (4) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:1080/proxy/: test<... (200; 4.052016ms) Mar 18 21:16:01.978: INFO: (4) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 4.137189ms) Mar 18 21:16:01.978: INFO: (4) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 4.640306ms) Mar 18 21:16:01.978: INFO: (4) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 4.652445ms) Mar 18 21:16:01.978: INFO: (4) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 4.636211ms) Mar 18 21:16:01.978: INFO: (4) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: ... 
(200; 4.764046ms) Mar 18 21:16:01.978: INFO: (4) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 4.761742ms) Mar 18 21:16:01.978: INFO: (4) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 4.801695ms) Mar 18 21:16:01.979: INFO: (4) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname2/proxy/: bar (200; 4.873269ms) Mar 18 21:16:01.979: INFO: (4) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 4.966182ms) Mar 18 21:16:01.979: INFO: (4) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 5.166224ms) Mar 18 21:16:01.979: INFO: (4) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 5.158969ms) Mar 18 21:16:01.979: INFO: (4) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname1/proxy/: foo (200; 5.218301ms) Mar 18 21:16:01.979: INFO: (4) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname1/proxy/: tls baz (200; 5.296969ms) Mar 18 21:16:01.983: INFO: (5) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 4.145788ms) Mar 18 21:16:01.983: INFO: (5) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 4.249938ms) Mar 18 21:16:01.983: INFO: (5) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 4.193573ms) Mar 18 21:16:01.983: INFO: (5) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname2/proxy/: bar (200; 4.267861ms) Mar 18 21:16:01.984: INFO: (5) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname1/proxy/: tls baz (200; 4.441326ms) Mar 18 21:16:01.984: INFO: (5) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 4.450405ms) Mar 18 21:16:01.984: INFO: (5) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:1080/proxy/: test<... (200; 4.433145ms) Mar 18 21:16:01.984: INFO: (5) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 4.509409ms) Mar 18 21:16:01.984: INFO: (5) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 4.787712ms) Mar 18 21:16:01.984: INFO: (5) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 4.80913ms) Mar 18 21:16:01.984: INFO: (5) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 4.830644ms) Mar 18 21:16:01.984: INFO: (5) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname1/proxy/: foo (200; 4.810781ms) Mar 18 21:16:01.984: INFO: (5) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 4.877019ms) Mar 18 21:16:01.984: INFO: (5) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: ... 
(200; 4.835579ms) Mar 18 21:16:01.984: INFO: (5) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 5.073471ms) Mar 18 21:16:01.987: INFO: (6) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 2.94096ms) Mar 18 21:16:01.987: INFO: (6) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 3.025433ms) Mar 18 21:16:01.987: INFO: (6) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 3.061716ms) Mar 18 21:16:01.987: INFO: (6) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: test<... (200; 3.6901ms) Mar 18 21:16:01.988: INFO: (6) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname1/proxy/: foo (200; 3.996096ms) Mar 18 21:16:01.988: INFO: (6) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:1080/proxy/: ... (200; 3.975681ms) Mar 18 21:16:01.988: INFO: (6) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 4.169161ms) Mar 18 21:16:01.988: INFO: (6) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 4.27588ms) Mar 18 21:16:01.988: INFO: (6) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 4.35742ms) Mar 18 21:16:01.989: INFO: (6) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 4.730479ms) Mar 18 21:16:01.989: INFO: (6) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 4.757297ms) Mar 18 21:16:01.989: INFO: (6) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname2/proxy/: bar (200; 4.829001ms) Mar 18 21:16:01.989: INFO: (6) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 5.106141ms) Mar 18 21:16:01.989: INFO: (6) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname1/proxy/: tls baz (200; 5.22693ms) Mar 18 21:16:01.992: INFO: (7) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 2.378889ms) Mar 18 21:16:01.993: INFO: (7) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 3.687193ms) Mar 18 21:16:01.993: INFO: (7) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:1080/proxy/: ... (200; 3.760508ms) Mar 18 21:16:01.993: INFO: (7) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 3.731156ms) Mar 18 21:16:01.994: INFO: (7) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 4.026826ms) Mar 18 21:16:01.994: INFO: (7) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 4.275463ms) Mar 18 21:16:01.994: INFO: (7) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:1080/proxy/: test<... (200; 4.247072ms) Mar 18 21:16:01.994: INFO: (7) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 4.22655ms) Mar 18 21:16:01.994: INFO: (7) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: test<... 
(200; 1.761653ms) Mar 18 21:16:01.998: INFO: (8) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 3.04521ms) Mar 18 21:16:01.998: INFO: (8) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 3.265373ms) Mar 18 21:16:01.998: INFO: (8) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 3.237029ms) Mar 18 21:16:01.998: INFO: (8) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 3.33538ms) Mar 18 21:16:01.998: INFO: (8) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 3.351312ms) Mar 18 21:16:01.998: INFO: (8) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:1080/proxy/: ... (200; 3.356007ms) Mar 18 21:16:01.998: INFO: (8) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 3.367531ms) Mar 18 21:16:01.998: INFO: (8) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: ... (200; 2.270263ms) Mar 18 21:16:02.003: INFO: (9) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 3.603883ms) Mar 18 21:16:02.003: INFO: (9) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 3.687175ms) Mar 18 21:16:02.004: INFO: (9) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 3.978937ms) Mar 18 21:16:02.004: INFO: (9) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 3.992275ms) Mar 18 21:16:02.004: INFO: (9) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname2/proxy/: bar (200; 4.091328ms) Mar 18 21:16:02.004: INFO: (9) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: test<... (200; 4.653527ms) Mar 18 21:16:02.004: INFO: (9) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 4.658049ms) Mar 18 21:16:02.005: INFO: (9) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 5.174538ms) Mar 18 21:16:02.005: INFO: (9) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 5.177565ms) Mar 18 21:16:02.005: INFO: (9) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname1/proxy/: foo (200; 5.122981ms) Mar 18 21:16:02.005: INFO: (9) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 5.172271ms) Mar 18 21:16:02.005: INFO: (9) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 5.228171ms) Mar 18 21:16:02.005: INFO: (9) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 5.189127ms) Mar 18 21:16:02.007: INFO: (10) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 2.293513ms) Mar 18 21:16:02.007: INFO: (10) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:1080/proxy/: ... 
(200; 2.24683ms) Mar 18 21:16:02.007: INFO: (10) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 2.153144ms) Mar 18 21:16:02.015: INFO: (10) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname1/proxy/: foo (200; 9.612744ms) Mar 18 21:16:02.015: INFO: (10) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 9.62426ms) Mar 18 21:16:02.015: INFO: (10) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 9.832222ms) Mar 18 21:16:02.015: INFO: (10) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:1080/proxy/: test<... (200; 10.195883ms) Mar 18 21:16:02.015: INFO: (10) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: test (200; 10.213395ms) Mar 18 21:16:02.015: INFO: (10) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 10.277904ms) Mar 18 21:16:02.015: INFO: (10) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname1/proxy/: tls baz (200; 10.334291ms) Mar 18 21:16:02.016: INFO: (10) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 10.498457ms) Mar 18 21:16:02.016: INFO: (10) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 10.580816ms) Mar 18 21:16:02.016: INFO: (10) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 10.621877ms) Mar 18 21:16:02.016: INFO: (10) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname2/proxy/: bar (200; 10.731186ms) Mar 18 21:16:02.016: INFO: (10) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 10.7442ms) Mar 18 21:16:02.020: INFO: (11) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname1/proxy/: foo (200; 3.882141ms) Mar 18 21:16:02.020: INFO: (11) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 4.040375ms) Mar 18 21:16:02.020: INFO: (11) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname2/proxy/: bar (200; 3.903255ms) Mar 18 21:16:02.020: INFO: (11) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 4.116515ms) Mar 18 21:16:02.020: INFO: (11) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 4.394013ms) Mar 18 21:16:02.020: INFO: (11) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 4.312979ms) Mar 18 21:16:02.021: INFO: (11) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 5.050486ms) Mar 18 21:16:02.021: INFO: (11) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname1/proxy/: tls baz (200; 5.294251ms) Mar 18 21:16:02.021: INFO: (11) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 5.264512ms) Mar 18 21:16:02.021: INFO: (11) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 5.268507ms) Mar 18 21:16:02.021: INFO: (11) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 5.286271ms) Mar 18 21:16:02.021: INFO: (11) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: test<... (200; 5.328374ms) Mar 18 21:16:02.021: INFO: (11) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:1080/proxy/: ... 
(200; 5.390133ms) Mar 18 21:16:02.021: INFO: (11) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 5.306881ms) Mar 18 21:16:02.022: INFO: (11) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 5.635718ms) Mar 18 21:16:02.025: INFO: (12) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 3.578976ms) Mar 18 21:16:02.025: INFO: (12) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 3.546114ms) Mar 18 21:16:02.026: INFO: (12) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 3.850885ms) Mar 18 21:16:02.026: INFO: (12) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: ... (200; 4.284436ms) Mar 18 21:16:02.027: INFO: (12) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 5.06786ms) Mar 18 21:16:02.027: INFO: (12) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 5.120364ms) Mar 18 21:16:02.027: INFO: (12) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:1080/proxy/: test<... (200; 5.116606ms) Mar 18 21:16:02.027: INFO: (12) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 5.403342ms) Mar 18 21:16:02.027: INFO: (12) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname2/proxy/: bar (200; 5.526595ms) Mar 18 21:16:02.027: INFO: (12) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 5.464137ms) Mar 18 21:16:02.027: INFO: (12) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 5.522482ms) Mar 18 21:16:02.028: INFO: (12) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname1/proxy/: foo (200; 6.073183ms) Mar 18 21:16:02.028: INFO: (12) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname1/proxy/: tls baz (200; 5.979269ms) Mar 18 21:16:02.028: INFO: (12) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 6.014449ms) Mar 18 21:16:02.032: INFO: (13) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 3.896449ms) Mar 18 21:16:02.032: INFO: (13) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:1080/proxy/: test<... (200; 3.9639ms) Mar 18 21:16:02.032: INFO: (13) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 3.979329ms) Mar 18 21:16:02.032: INFO: (13) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:1080/proxy/: ... 
(200; 3.938559ms) Mar 18 21:16:02.032: INFO: (13) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 3.951421ms) Mar 18 21:16:02.032: INFO: (13) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: test (200; 3.977702ms) Mar 18 21:16:02.032: INFO: (13) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 3.993278ms) Mar 18 21:16:02.032: INFO: (13) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 4.533469ms) Mar 18 21:16:02.032: INFO: (13) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname2/proxy/: bar (200; 4.559647ms) Mar 18 21:16:02.032: INFO: (13) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 4.554508ms) Mar 18 21:16:02.033: INFO: (13) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 4.707103ms) Mar 18 21:16:02.033: INFO: (13) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname1/proxy/: foo (200; 4.798955ms) Mar 18 21:16:02.033: INFO: (13) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname1/proxy/: tls baz (200; 4.841146ms) Mar 18 21:16:02.036: INFO: (14) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 3.4801ms) Mar 18 21:16:02.036: INFO: (14) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 3.559134ms) Mar 18 21:16:02.037: INFO: (14) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: ... (200; 3.62397ms) Mar 18 21:16:02.037: INFO: (14) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 3.742199ms) Mar 18 21:16:02.037: INFO: (14) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 3.768121ms) Mar 18 21:16:02.037: INFO: (14) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 3.817292ms) Mar 18 21:16:02.037: INFO: (14) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 4.628737ms) Mar 18 21:16:02.037: INFO: (14) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:1080/proxy/: test<... (200; 4.584948ms) Mar 18 21:16:02.038: INFO: (14) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 4.643377ms) Mar 18 21:16:02.038: INFO: (14) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 4.967951ms) Mar 18 21:16:02.038: INFO: (14) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 4.897815ms) Mar 18 21:16:02.038: INFO: (14) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname1/proxy/: foo (200; 4.90369ms) Mar 18 21:16:02.038: INFO: (14) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname2/proxy/: bar (200; 5.105545ms) Mar 18 21:16:02.038: INFO: (14) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname1/proxy/: tls baz (200; 4.99971ms) Mar 18 21:16:02.041: INFO: (15) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 3.315595ms) Mar 18 21:16:02.042: INFO: (15) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:1080/proxy/: ... 
(200; 3.744513ms) Mar 18 21:16:02.042: INFO: (15) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 3.72842ms) Mar 18 21:16:02.042: INFO: (15) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 3.836542ms) Mar 18 21:16:02.042: INFO: (15) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 3.986062ms) Mar 18 21:16:02.043: INFO: (15) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: test<... (200; 4.523882ms) Mar 18 21:16:02.043: INFO: (15) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 4.54153ms) Mar 18 21:16:02.043: INFO: (15) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 4.573745ms) Mar 18 21:16:02.043: INFO: (15) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 4.605105ms) Mar 18 21:16:02.043: INFO: (15) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 4.969159ms) Mar 18 21:16:02.043: INFO: (15) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname1/proxy/: foo (200; 5.108168ms) Mar 18 21:16:02.043: INFO: (15) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname2/proxy/: bar (200; 5.16123ms) Mar 18 21:16:02.043: INFO: (15) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname1/proxy/: tls baz (200; 5.172683ms) Mar 18 21:16:02.043: INFO: (15) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 5.132941ms) Mar 18 21:16:02.043: INFO: (15) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 5.167037ms) Mar 18 21:16:02.045: INFO: (16) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: test<... (200; 4.408067ms) Mar 18 21:16:02.048: INFO: (16) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 4.390365ms) Mar 18 21:16:02.048: INFO: (16) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:1080/proxy/: ... 
(200; 4.422679ms) Mar 18 21:16:02.048: INFO: (16) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname2/proxy/: bar (200; 5.023851ms) Mar 18 21:16:02.048: INFO: (16) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 5.025686ms) Mar 18 21:16:02.048: INFO: (16) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 5.049742ms) Mar 18 21:16:02.049: INFO: (16) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname1/proxy/: foo (200; 5.261871ms) Mar 18 21:16:02.049: INFO: (16) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 5.330029ms) Mar 18 21:16:02.049: INFO: (16) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 5.476331ms) Mar 18 21:16:02.049: INFO: (16) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 5.416698ms) Mar 18 21:16:02.049: INFO: (16) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 5.730218ms) Mar 18 21:16:02.049: INFO: (16) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 5.806915ms) Mar 18 21:16:02.049: INFO: (16) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname1/proxy/: tls baz (200; 5.808971ms) Mar 18 21:16:02.049: INFO: (16) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 5.924128ms) Mar 18 21:16:02.052: INFO: (17) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 2.936151ms) Mar 18 21:16:02.054: INFO: (17) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 4.096597ms) Mar 18 21:16:02.054: INFO: (17) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 4.283878ms) Mar 18 21:16:02.054: INFO: (17) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 4.471153ms) Mar 18 21:16:02.054: INFO: (17) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: test<... (200; 5.697702ms) Mar 18 21:16:02.055: INFO: (17) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 5.779985ms) Mar 18 21:16:02.055: INFO: (17) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname1/proxy/: tls baz (200; 5.617241ms) Mar 18 21:16:02.055: INFO: (17) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 5.685182ms) Mar 18 21:16:02.055: INFO: (17) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:1080/proxy/: ... (200; 5.749519ms) Mar 18 21:16:02.076: INFO: (18) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 20.975917ms) Mar 18 21:16:02.076: INFO: (18) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 21.031244ms) Mar 18 21:16:02.076: INFO: (18) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:1080/proxy/: ... 
(200; 20.96565ms) Mar 18 21:16:02.076: INFO: (18) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:462/proxy/: tls qux (200; 21.105466ms) Mar 18 21:16:02.077: INFO: (18) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:460/proxy/: tls baz (200; 21.383547ms) Mar 18 21:16:02.077: INFO: (18) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 21.382914ms) Mar 18 21:16:02.077: INFO: (18) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: test (200; 21.854705ms) Mar 18 21:16:02.077: INFO: (18) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:1080/proxy/: test<... (200; 21.926501ms) Mar 18 21:16:02.078: INFO: (18) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 22.381353ms) Mar 18 21:16:02.078: INFO: (18) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname1/proxy/: tls baz (200; 22.535406ms) Mar 18 21:16:02.078: INFO: (18) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname1/proxy/: foo (200; 22.74836ms) Mar 18 21:16:02.078: INFO: (18) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname2/proxy/: bar (200; 22.874551ms) Mar 18 21:16:02.079: INFO: (18) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 23.394328ms) Mar 18 21:16:02.079: INFO: (18) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 23.315663ms) Mar 18 21:16:02.082: INFO: (19) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 3.055662ms) Mar 18 21:16:02.082: INFO: (19) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:1080/proxy/: ... (200; 3.455659ms) Mar 18 21:16:02.085: INFO: (19) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj/proxy/: test (200; 6.319728ms) Mar 18 21:16:02.085: INFO: (19) /api/v1/namespaces/proxy-6792/services/http:proxy-service-clcb8:portname1/proxy/: foo (200; 6.488102ms) Mar 18 21:16:02.085: INFO: (19) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname1/proxy/: foo (200; 6.56801ms) Mar 18 21:16:02.085: INFO: (19) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 6.505499ms) Mar 18 21:16:02.086: INFO: (19) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:1080/proxy/: test<... 
(200; 6.639557ms) Mar 18 21:16:02.086: INFO: (19) /api/v1/namespaces/proxy-6792/services/proxy-service-clcb8:portname2/proxy/: bar (200; 6.601662ms) Mar 18 21:16:02.086: INFO: (19) /api/v1/namespaces/proxy-6792/services/https:proxy-service-clcb8:tlsportname2/proxy/: tls qux (200; 6.820138ms) Mar 18 21:16:02.086: INFO: (19) /api/v1/namespaces/proxy-6792/pods/http:proxy-service-clcb8-m7wtj:162/proxy/: bar (200; 6.6685ms) Mar 18 21:16:02.086: INFO: (19) /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/: foo (200; 6.789839ms) Mar 18 21:16:02.086: INFO: (19) /api/v1/namespaces/proxy-6792/pods/https:proxy-service-clcb8-m7wtj:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:16:09.335: INFO: Creating deployment "webserver-deployment" Mar 18 21:16:09.343: INFO: Waiting for observed generation 1 Mar 18 21:16:11.365: INFO: Waiting for all required pods to come up Mar 18 21:16:11.370: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 18 21:16:19.380: INFO: Waiting for deployment "webserver-deployment" to complete Mar 18 21:16:19.384: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 18 21:16:19.389: INFO: Updating deployment webserver-deployment Mar 18 21:16:19.389: INFO: Waiting for observed generation 2 Mar 18 21:16:21.555: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 18 21:16:21.558: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 18 21:16:21.592: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 18 21:16:21.599: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 18 21:16:21.599: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 18 21:16:21.602: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 18 21:16:21.606: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 18 21:16:21.606: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 18 21:16:21.611: INFO: Updating deployment webserver-deployment Mar 18 21:16:21.611: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 18 21:16:21.641: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 18 21:16:21.682: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 18 21:16:22.055: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3938 /apis/apps/v1/namespaces/deployment-3938/deployments/webserver-deployment 32d2bed4-9bfc-4a90-9bcd-15b23cccaa77 847873 3 2020-03-18 21:16:09 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] 
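Stepping back from the raw timing entries above: each of the 320 proxy attempts is an apiserver proxy-subresource GET of the form /api/v1/namespaces/{ns}/{pods|services}/[{scheme}:]{name}[:{port}]/proxy/{path}, where the http:/https: prefix selects the scheme and the trailing port selects which of the echo server's ports to hit. A minimal client-go sketch of one such request, using the pod name from this run (the snippet itself is illustrative; DoRaw takes a context in recent client-go and no argument in older releases):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Builds GET /api/v1/namespaces/proxy-6792/pods/proxy-service-clcb8-m7wtj:160/proxy/
	// i.e. proxies through the apiserver to port 160 of the pod,
	// matching the "foo" cases in the log above.
	body, err := client.CoreV1().RESTClient().Get().
		Namespace("proxy-6792").
		Resource("pods").
		Name("proxy-service-clcb8-m7wtj:160").
		SubResource("proxy").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body)
}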
[]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e04d68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-18 21:16:19 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-18 21:16:21 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 18 21:16:22.179: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-3938 /apis/apps/v1/namespaces/deployment-3938/replicasets/webserver-deployment-c7997dcc8 c9c6146c-a472-4cee-b557-56a903744a1f 847920 3 2020-03-18 21:16:19 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 32d2bed4-9bfc-4a90-9bcd-15b23cccaa77 0xc002e05247 0xc002e05248}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e052b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 18 21:16:22.179: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 18 21:16:22.179: INFO: 
&ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-3938 /apis/apps/v1/namespaces/deployment-3938/replicasets/webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 847914 3 2020-03-18 21:16:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 32d2bed4-9bfc-4a90-9bcd-15b23cccaa77 0xc002e05187 0xc002e05188}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e051e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 18 21:16:22.331: INFO: Pod "webserver-deployment-595b5b9587-25xb2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-25xb2 webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-25xb2 ff2bf39b-61ea-4a64-8ac2-dedea2a5b0fe 847899 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc00228abb7 0xc00228abb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.331: INFO: Pod "webserver-deployment-595b5b9587-2x8jt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2x8jt webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-2x8jt d58a4709-039e-4f16-a886-d409a3e7deeb 847913 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc00228acd0 0xc00228acd1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.331: INFO: Pod "webserver-deployment-595b5b9587-5jx8f" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5jx8f webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-5jx8f 1ba3d09a-c040-4a10-a140-75c779d9672f 847756 0 2020-03-18 
21:16:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc00228ade0 0xc00228ade1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.205,StartTime:2020-03-18 21:16:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-18 21:16:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://942b436de530d12463da9c1eb2fe4549bc8b69d5fdc3884a0f70e4e235ceb2f1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.205,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.332: INFO: Pod "webserver-deployment-595b5b9587-5xnpr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5xnpr webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-5xnpr 28a337f8-7511-411b-b709-62dfdc4a5ff2 847898 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc00228af57 0xc00228af58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,
Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.332: INFO: Pod "webserver-deployment-595b5b9587-6gpj9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6gpj9 webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-6gpj9 8f4a33db-d3c4-4167-81ba-0fb6c5d5c101 847781 0 2020-03-18 21:16:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc00228b070 0xc00228b071}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:n
il,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.208,StartTime:2020-03-18 21:16:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-18 21:16:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://819cfa9fd7905178a821696f88ab3f62fe133fa98911d30e4bc6964df224d9dc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.208,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.332: INFO: Pod "webserver-deployment-595b5b9587-9jxhl" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9jxhl webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-9jxhl a721f981-65ce-4b16-98f1-925e44ffface 847733 0 2020-03-18 21:16:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc00228b1e7 0xc00228b1e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.233,StartTime:2020-03-18 21:16:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-18 21:16:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://968dcb44230c60f74b95fa48911a5e4e648efd20662eea8e5c74cbd559cedefb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.233,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.332: INFO: Pod "webserver-deployment-595b5b9587-d4v6p" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d4v6p webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-d4v6p d8bc477c-591f-4365-b632-c3de9d1d75e7 847912 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc00228b360 0xc00228b361}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.332: INFO: Pod "webserver-deployment-595b5b9587-f2prz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-f2prz webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-f2prz c2240a06-5375-4ab0-85d9-352e72f0bb4a 847910 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc00228b470 0xc00228b471}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecut
e,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.332: INFO: Pod "webserver-deployment-595b5b9587-gdz25" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gdz25 webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-gdz25 6399a4c1-2735-4139-8821-f139a332f813 847749 0 2020-03-18 21:16:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc00228b580 0xc00228b581}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toler
ation{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.204,StartTime:2020-03-18 21:16:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-18 21:16:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5761cc8a908a5953212d89e39b6b650c75bb1e328a812bd3afff471a3acddc7c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.204,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.332: INFO: Pod "webserver-deployment-595b5b9587-gtvjt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gtvjt webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-gtvjt ecbafa90-ce57-4d0d-a302-46582cc500ee 847874 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc00228b6f7 0xc00228b6f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.333: INFO: Pod "webserver-deployment-595b5b9587-hgb5k" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hgb5k webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-hgb5k 3f8428aa-2931-4823-8fe5-74e5d4a2983b 847909 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc00228b810 0xc00228b811}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.333: INFO: Pod "webserver-deployment-595b5b9587-hmmlj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hmmlj webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-hmmlj 9210805f-d5a1-419a-8ae2-93cc4684431a 847716 0 2020-03-18 
21:16:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc00228b920 0xc00228b921}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.232,StartTime:2020-03-18 21:16:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-18 21:16:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://908f63a4c13a2bd74e75faaa46aa7f483916b8660e3dc0aa65e78106e3d1d8a1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.232,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.333: INFO: Pod "webserver-deployment-595b5b9587-k2f2q" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-k2f2q webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-k2f2q 3e1afb3f-c575-4eaf-9ffb-3c72360bb189 847779 0 2020-03-18 21:16:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc00228ba90 0xc00228ba91}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.207,StartTime:2020-03-18 21:16:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-18 21:16:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c322336e4fffd87f2db6707cdbd4778bc6efb20ad4aaacdc4c4efae6b37a9549,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.207,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.333: INFO: Pod "webserver-deployment-595b5b9587-kdd59" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kdd59 webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-kdd59 4d5ebdce-0ebc-42cb-85e1-d2c6bea2cc31 847923 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc00228bc07 0xc00228bc08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-18 21:16:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.333: INFO: Pod "webserver-deployment-595b5b9587-kg8hf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kg8hf webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-kg8hf c32e211f-6c4b-4384-ab7f-0c24aa07e495 847897 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc00228bd70 0xc00228bd71}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,
EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.333: INFO: Pod "webserver-deployment-595b5b9587-pwltg" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pwltg webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-pwltg 03bf2e91-45d3-427b-aefa-d44cc825da60 847753 0 2020-03-18 21:16:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc00228be80 0xc00228be81}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfi
g:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.234,StartTime:2020-03-18 21:16:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-18 21:16:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5cee9f4382d0ac77ca7a494fdf982638bebfb8363489408b7e4df4addb882425,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.234,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.334: INFO: Pod "webserver-deployment-595b5b9587-vj97z" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vj97z webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-vj97z f5475ee1-c6b7-4a99-976f-e8332357ffdf 847876 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc002d4e000 0xc002d4e001}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.334: INFO: Pod "webserver-deployment-595b5b9587-vltt5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vltt5 webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-vltt5 bb052c69-14ba-4672-b823-ef0ceaedddad 847911 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc002d4e1c0 0xc002d4e1c1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.334: INFO: Pod "webserver-deployment-595b5b9587-vwzqk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vwzqk webserver-deployment-595b5b9587- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-vwzqk 8344c8ca-b083-42ef-98f7-aada1434b0b6 847900 0 
2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc002d4e450 0xc002d4e451}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.334: INFO: Pod "webserver-deployment-595b5b9587-wwgr7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wwgr7 webserver-deployment-595b5b9587- deployment-3938 
/api/v1/namespaces/deployment-3938/pods/webserver-deployment-595b5b9587-wwgr7 605e411e-f254-47c9-90f4-6f0522c41580 847785 0 2020-03-18 21:16:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 95c0857d-4160-4be2-9154-c53a49d74928 0xc002d4e640 0xc002d4e641}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:17 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.206,StartTime:2020-03-18 21:16:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-18 21:16:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://15e1ce4b0a4bded3946ff5ec277b9dbb9514f3c4d448df49c3922c406d806cc1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.206,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.334: INFO: Pod "webserver-deployment-c7997dcc8-4b6vm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4b6vm webserver-deployment-c7997dcc8- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-c7997dcc8-4b6vm b1622067-c277-4a7e-aa84-d36fd0d39f73 847850 0 2020-03-18 21:16:19 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c9c6146c-a472-4cee-b557-56a903744a1f 0xc002d4e8e7 0xc002d4e8e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Conta
iner{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-18 21:16:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.334: INFO: Pod "webserver-deployment-c7997dcc8-696xb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-696xb webserver-deployment-c7997dcc8- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-c7997dcc8-696xb 9e5fea79-ff33-46a7-b758-49f6abf434b3 847904 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c9c6146c-a472-4cee-b557-56a903744a1f 0xc002d4eb80 0xc002d4eb81}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.334: INFO: Pod "webserver-deployment-c7997dcc8-6qjbb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6qjbb webserver-deployment-c7997dcc8- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-c7997dcc8-6qjbb 6edf82cf-852d-4680-8d97-a1da865ef754 847848 0 2020-03-18 21:16:19 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
c9c6146c-a472-4cee-b557-56a903744a1f 0xc002d4eca0 0xc002d4eca1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-18 21:16:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.335: INFO: Pod "webserver-deployment-c7997dcc8-7mrq5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7mrq5 webserver-deployment-c7997dcc8- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-c7997dcc8-7mrq5 599ed384-1cb9-426d-829a-0746beb92c05 847826 0 2020-03-18 21:16:19 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c9c6146c-a472-4cee-b557-56a903744a1f 0xc002d4efe0 0xc002d4efe1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readines
sGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-18 21:16:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.335: INFO: Pod "webserver-deployment-c7997dcc8-bzggk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bzggk webserver-deployment-c7997dcc8- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-c7997dcc8-bzggk 587ff7ba-a923-4e7e-be35-12d585d462ba 847918 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c9c6146c-a472-4cee-b557-56a903744a1f 0xc002d4f180 0xc002d4f181}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.335: INFO: Pod "webserver-deployment-c7997dcc8-j2hdh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j2hdh webserver-deployment-c7997dcc8- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-c7997dcc8-j2hdh 2a9e75d1-fcf7-42dc-b29e-a2ce761d5716 847915 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
c9c6146c-a472-4cee-b557-56a903744a1f 0xc002d4f2b0 0xc002d4f2b1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.335: INFO: Pod "webserver-deployment-c7997dcc8-jprct" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jprct webserver-deployment-c7997dcc8- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-c7997dcc8-jprct 439147ca-e4bb-4942-94f0-3530c8b732ca 847831 0 2020-03-18 21:16:19 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c9c6146c-a472-4cee-b557-56a903744a1f 0xc002d4f3e0 0xc002d4f3e1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-18 21:16:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.335: INFO: Pod "webserver-deployment-c7997dcc8-k9gcb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k9gcb webserver-deployment-c7997dcc8- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-c7997dcc8-k9gcb 211dc0c8-e8a9-4474-bd59-eb02dc63ebc5 847907 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c9c6146c-a472-4cee-b557-56a903744a1f 0xc002d4f550 0xc002d4f551}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0
,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.336: INFO: Pod "webserver-deployment-c7997dcc8-ktps4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ktps4 webserver-deployment-c7997dcc8- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-c7997dcc8-ktps4 bb5af8ff-bf22-4e7c-a3cf-e0739bb0ba91 847841 0 2020-03-18 21:16:19 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c9c6146c-a472-4cee-b557-56a903744a1f 0xc002d4f680 0xc002d4f681}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*30
0,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-18 21:16:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.336: INFO: Pod "webserver-deployment-c7997dcc8-rx9ch" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rx9ch webserver-deployment-c7997dcc8- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-c7997dcc8-rx9ch 0e2e1b2c-58a6-4663-ba42-daf1c53d9f13 847906 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c9c6146c-a472-4cee-b557-56a903744a1f 0xc002d4f7f0 0xc002d4f7f1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.336: INFO: Pod "webserver-deployment-c7997dcc8-tc5jw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tc5jw webserver-deployment-c7997dcc8- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-c7997dcc8-tc5jw 2768fd1a-7446-4c9d-a844-bbe67be35f26 847924 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
c9c6146c-a472-4cee-b557-56a903744a1f 0xc002d4fb10 0xc002d4fb11}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-18 21:16:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.336: INFO: Pod "webserver-deployment-c7997dcc8-vgn6j" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vgn6j webserver-deployment-c7997dcc8- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-c7997dcc8-vgn6j e7e609eb-aff8-4d19-a38b-310343946d0f 847896 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c9c6146c-a472-4cee-b557-56a903744a1f 0xc002d4fca0 0xc002d4fca1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readin
essGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:16:22.336: INFO: Pod "webserver-deployment-c7997dcc8-whf7l" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-whf7l webserver-deployment-c7997dcc8- deployment-3938 /api/v1/namespaces/deployment-3938/pods/webserver-deployment-c7997dcc8-whf7l 9064d1f0-56d3-4f8a-b65b-8a652dc241d9 847877 0 2020-03-18 21:16:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c9c6146c-a472-4cee-b557-56a903744a1f 0xc002d4fdc0 0xc002d4fdc1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hf4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hf4ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hf4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassN
ame:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:16:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:16:22.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3938" for this suite. • [SLOW TEST:13.273 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":40,"skipped":578,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:16:22.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-38effd4f-dc91-4f60-90ea-0f7fbbcecf36 STEP: Creating a pod to test consume configMaps Mar 18 21:16:22.761: INFO: Waiting up to 5m0s for pod "pod-configmaps-68bc32fa-4aa0-49c9-97b4-19121e80f655" in namespace "configmap-9376" to be "success or failure" Mar 18 21:16:22.797: INFO: Pod "pod-configmaps-68bc32fa-4aa0-49c9-97b4-19121e80f655": Phase="Pending", Reason="", readiness=false. Elapsed: 35.955752ms Mar 18 21:16:25.179: INFO: Pod "pod-configmaps-68bc32fa-4aa0-49c9-97b4-19121e80f655": Phase="Pending", Reason="", readiness=false. Elapsed: 2.417922895s Mar 18 21:16:27.612: INFO: Pod "pod-configmaps-68bc32fa-4aa0-49c9-97b4-19121e80f655": Phase="Pending", Reason="", readiness=false. Elapsed: 4.851645416s Mar 18 21:16:30.347: INFO: Pod "pod-configmaps-68bc32fa-4aa0-49c9-97b4-19121e80f655": Phase="Pending", Reason="", readiness=false. Elapsed: 7.58631593s Mar 18 21:16:32.724: INFO: Pod "pod-configmaps-68bc32fa-4aa0-49c9-97b4-19121e80f655": Phase="Pending", Reason="", readiness=false. Elapsed: 9.962921639s Mar 18 21:16:34.774: INFO: Pod "pod-configmaps-68bc32fa-4aa0-49c9-97b4-19121e80f655": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.01268987s Mar 18 21:16:36.837: INFO: Pod "pod-configmaps-68bc32fa-4aa0-49c9-97b4-19121e80f655": Phase="Running", Reason="", readiness=true. Elapsed: 14.076217824s Mar 18 21:16:38.840: INFO: Pod "pod-configmaps-68bc32fa-4aa0-49c9-97b4-19121e80f655": Phase="Running", Reason="", readiness=true. Elapsed: 16.07931408s Mar 18 21:16:40.843: INFO: Pod "pod-configmaps-68bc32fa-4aa0-49c9-97b4-19121e80f655": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.082309586s STEP: Saw pod success Mar 18 21:16:40.843: INFO: Pod "pod-configmaps-68bc32fa-4aa0-49c9-97b4-19121e80f655" satisfied condition "success or failure" Mar 18 21:16:40.873: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-68bc32fa-4aa0-49c9-97b4-19121e80f655 container configmap-volume-test: STEP: delete the pod Mar 18 21:16:40.922: INFO: Waiting for pod pod-configmaps-68bc32fa-4aa0-49c9-97b4-19121e80f655 to disappear Mar 18 21:16:40.931: INFO: Pod pod-configmaps-68bc32fa-4aa0-49c9-97b4-19121e80f655 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:16:40.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9376" for this suite. • [SLOW TEST:18.410 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":582,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:16:40.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 21:16:41.006: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ba9e461-43e5-46a2-b258-b6104e3b4a69" in namespace "projected-6093" to be "success or failure" Mar 18 21:16:41.023: INFO: Pod "downwardapi-volume-6ba9e461-43e5-46a2-b258-b6104e3b4a69": Phase="Pending", Reason="", readiness=false. Elapsed: 17.734853ms Mar 18 21:16:43.027: INFO: Pod "downwardapi-volume-6ba9e461-43e5-46a2-b258-b6104e3b4a69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02177205s Mar 18 21:16:45.035: INFO: Pod "downwardapi-volume-6ba9e461-43e5-46a2-b258-b6104e3b4a69": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029320394s STEP: Saw pod success Mar 18 21:16:45.035: INFO: Pod "downwardapi-volume-6ba9e461-43e5-46a2-b258-b6104e3b4a69" satisfied condition "success or failure" Mar 18 21:16:45.037: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6ba9e461-43e5-46a2-b258-b6104e3b4a69 container client-container: STEP: delete the pod Mar 18 21:16:45.053: INFO: Waiting for pod downwardapi-volume-6ba9e461-43e5-46a2-b258-b6104e3b4a69 to disappear Mar 18 21:16:45.082: INFO: Pod downwardapi-volume-6ba9e461-43e5-46a2-b258-b6104e3b4a69 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:16:45.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6093" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":592,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:16:45.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-01c95fbb-9097-4e6b-b6e0-3ea7b4cf72e8 in namespace container-probe-9997 Mar 18 21:16:49.225: INFO: Started pod busybox-01c95fbb-9097-4e6b-b6e0-3ea7b4cf72e8 in namespace container-probe-9997 STEP: checking the pod's current state and verifying that restartCount is present Mar 18 21:16:49.229: INFO: Initial restart count of pod busybox-01c95fbb-9097-4e6b-b6e0-3ea7b4cf72e8 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:20:49.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9997" for this suite. 
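The probe test above passes because the container keeps /tmp/health in place, so the exec probe never fails and the restart count stays at 0 across the roughly four-minute observation window (pod started 21:16:49, deleted 21:20:49). A minimal sketch of the pod shape such a test creates, using the v1.17-era k8s.io/api types; the names, image, and timings here are illustrative, not the suite's actual values:

    package probes

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // livenessPod builds a busybox pod whose liveness is decided by the
    // kubelet exec'ing `cat /tmp/health` inside the container. The command
    // touches the file first, so the probe keeps succeeding and the
    // restartCount observed by the test stays at 0.
    func livenessPod(namespace string) *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness", Namespace: namespace},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:    "busybox",
    				Image:   "busybox",
    				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
    				LivenessProbe: &corev1.Probe{
    					// Handler was renamed ProbeHandler in later API versions.
    					Handler: corev1.Handler{
    						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
    					},
    					InitialDelaySeconds: 15,
    					FailureThreshold:    1,
    				},
    			}},
    		},
    	}
    }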
• [SLOW TEST:244.865 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":602,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:20:49.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:20:50.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8525" for this suite. 
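The kubelet test above schedules a pod whose command always exits non-zero and then verifies the crash-looping pod can still be deleted. A sketch of that delete call using the context-free client-go v0.17 signatures matching the v1.17 cluster in this run (newer client-go adds a context and options argument); the pod name and zero grace period are illustrative:

    package main

    import (
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Same kubeconfig path the suite logs above.
    	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	// Deleting a crash-looping pod works the same as deleting a healthy
    	// one; a zero grace period forces immediate removal.
    	grace := int64(0)
    	err = clientset.CoreV1().Pods("kubelet-test-8525").
    		Delete("bin-false-pod", &metav1.DeleteOptions{GracePeriodSeconds: &grace})
    	fmt.Println("delete error:", err)
    }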
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":619,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:20:50.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 18 21:20:50.643: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 18 21:20:52.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720163250, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720163250, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720163250, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720163250, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 21:20:55.684: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:20:55.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:20:56.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-97" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.015 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":45,"skipped":620,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:20:57.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 18 21:20:57.168: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6772 /api/v1/namespaces/watch-6772/configmaps/e2e-watch-test-watch-closed f6339ecb-ac8f-474b-a682-e95cbbcbd9dc 849091 0 2020-03-18 21:20:57 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 18 21:20:57.168: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6772 /api/v1/namespaces/watch-6772/configmaps/e2e-watch-test-watch-closed f6339ecb-ac8f-474b-a682-e95cbbcbd9dc 849092 0 2020-03-18 21:20:57 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 18 21:20:57.179: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6772 /api/v1/namespaces/watch-6772/configmaps/e2e-watch-test-watch-closed f6339ecb-ac8f-474b-a682-e95cbbcbd9dc 849093 0 2020-03-18 21:20:57 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 18 21:20:57.180: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6772 /api/v1/namespaces/watch-6772/configmaps/e2e-watch-test-watch-closed f6339ecb-ac8f-474b-a682-e95cbbcbd9dc 849094 0 2020-03-18 21:20:57 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:20:57.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6772" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":46,"skipped":637,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:20:57.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-60f55082-55e3-47e8-95b6-b2c6ef303023 in namespace container-probe-8064 Mar 18 21:21:01.316: INFO: Started pod liveness-60f55082-55e3-47e8-95b6-b2c6ef303023 in namespace container-probe-8064 STEP: checking the pod's current state and verifying that restartCount is present Mar 18 21:21:01.337: INFO: Initial restart count of pod liveness-60f55082-55e3-47e8-95b6-b2c6ef303023 is 0 Mar 18 21:21:15.386: INFO: Restart count of pod container-probe-8064/liveness-60f55082-55e3-47e8-95b6-b2c6ef303023 is now 1 (14.048858968s elapsed) Mar 18 21:21:35.428: INFO: Restart count of pod container-probe-8064/liveness-60f55082-55e3-47e8-95b6-b2c6ef303023 is now 2 (34.090902931s elapsed) Mar 18 21:21:55.474: INFO: Restart count of pod container-probe-8064/liveness-60f55082-55e3-47e8-95b6-b2c6ef303023 is now 3 (54.13683177s elapsed) Mar 18 21:22:15.522: INFO: Restart count of pod container-probe-8064/liveness-60f55082-55e3-47e8-95b6-b2c6ef303023 is now 4 (1m14.185022277s elapsed) Mar 18 21:23:17.733: INFO: Restart count of pod container-probe-8064/liveness-60f55082-55e3-47e8-95b6-b2c6ef303023 is now 5 (2m16.395638146s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:23:17.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8064" for this suite. 
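The Watchers test further up in this block closed its watch, mutated the ConfigMap again, then opened a new watch from the last resourceVersion it had observed and still received the missed MODIFIED and DELETED events. That replay-from-resourceVersion pattern, sketched with v0.17-style client-go (kubeconfig path taken from this run; the namespace and event handling are illustrative):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	cms := clientset.CoreV1().ConfigMaps("default")

    	// First watch: remember the resourceVersion of the last event seen,
    	// then stop, simulating a dropped connection.
    	w, err := cms.Watch(metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	var lastRV string
    	for ev := range w.ResultChan() {
    		if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
    			lastRV = cm.ResourceVersion
    		}
    		w.Stop()
    		break
    	}

    	// Second watch resumes from lastRV: the API server replays every
    	// change made while no watch was open.
    	w2, err := cms.Watch(metav1.ListOptions{ResourceVersion: lastRV})
    	if err != nil {
    		panic(err)
    	}
    	for ev := range w2.ResultChan() {
    		fmt.Println("replayed:", ev.Type)
    	}
    }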
• [SLOW TEST:140.613 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":643,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:23:17.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 18 21:23:17.858: INFO: Waiting up to 5m0s for pod "pod-1f9cad80-62d6-49a5-8f9f-43a272251d2f" in namespace "emptydir-5757" to be "success or failure" Mar 18 21:23:18.083: INFO: Pod "pod-1f9cad80-62d6-49a5-8f9f-43a272251d2f": Phase="Pending", Reason="", readiness=false. Elapsed: 225.131586ms Mar 18 21:23:20.087: INFO: Pod "pod-1f9cad80-62d6-49a5-8f9f-43a272251d2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229297169s Mar 18 21:23:22.091: INFO: Pod "pod-1f9cad80-62d6-49a5-8f9f-43a272251d2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.23307031s STEP: Saw pod success Mar 18 21:23:22.091: INFO: Pod "pod-1f9cad80-62d6-49a5-8f9f-43a272251d2f" satisfied condition "success or failure" Mar 18 21:23:22.094: INFO: Trying to get logs from node jerma-worker pod pod-1f9cad80-62d6-49a5-8f9f-43a272251d2f container test-container: STEP: delete the pod Mar 18 21:23:22.139: INFO: Waiting for pod pod-1f9cad80-62d6-49a5-8f9f-43a272251d2f to disappear Mar 18 21:23:22.150: INFO: Pod pod-1f9cad80-62d6-49a5-8f9f-43a272251d2f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:23:22.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5757" for this suite. 
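The EmptyDir test above mounts a memory-backed volume and asserts the mount's mode bits. The only thing that distinguishes the tmpfs variant from a plain emptyDir is the medium on the volume source; a minimal sketch of that volume definition (the volume name is illustrative):

    package volumes

    import (
    	corev1 "k8s.io/api/core/v1"
    )

    // tmpfsVolume builds the emptyDir-on-tmpfs volume the test mounts:
    // Medium "Memory" makes the kubelet back the directory with tmpfs
    // instead of node disk.
    func tmpfsVolume() corev1.Volume {
    	return corev1.Volume{
    		Name: "test-volume",
    		VolumeSource: corev1.VolumeSource{
    			EmptyDir: &corev1.EmptyDirVolumeSource{
    				Medium: corev1.StorageMediumMemory,
    			},
    		},
    	}
    }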
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":654,"failed":0} SSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:23:22.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:23:22.234: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 18 21:23:27.252: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 18 21:23:27.252: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 18 21:23:27.372: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5656 /apis/apps/v1/namespaces/deployment-5656/deployments/test-cleanup-deployment a0090e68-4c87-4c4c-9077-c5e0eab45ae9 849638 1 2020-03-18 21:23:27 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d4fb28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 18 21:23:27.480: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-5656 /apis/apps/v1/namespaces/deployment-5656/replicasets/test-cleanup-deployment-55ffc6b7b6 18bf9bda-781f-4af8-ae7b-d97f3ee5dade 849645 1 2020-03-18 21:23:27 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment a0090e68-4c87-4c4c-9077-c5e0eab45ae9 0xc002d4ff37 0xc002d4ff38}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d4ffa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 18 21:23:27.480: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 18 21:23:27.480: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5656 /apis/apps/v1/namespaces/deployment-5656/replicasets/test-cleanup-controller b732c531-ee73-4a4a-bfbb-9c96b89b8c79 849640 1 2020-03-18 21:23:22 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment a0090e68-4c87-4c4c-9077-c5e0eab45ae9 0xc002d4fe67 0xc002d4fe68}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002d4fec8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 18 21:23:27.498: INFO: Pod "test-cleanup-controller-7pwbb" is available: &Pod{ObjectMeta:{test-cleanup-controller-7pwbb test-cleanup-controller- deployment-5656 /api/v1/namespaces/deployment-5656/pods/test-cleanup-controller-7pwbb ea398b1e-53ea-41f3-a162-13d91cd71527 849625 0 2020-03-18 21:23:22 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller b732c531-ee73-4a4a-bfbb-9c96b89b8c79 0xc002dbdf87 0xc002dbdf88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zjdvd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zjdvd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zjdvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:23:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:23:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:23:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:23:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.252,StartTime:2020-03-18 21:23:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-18 21:23:24 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://958a7fe99ee1bab6327870ec31fd38b2e75da6d85c2cc07b7d399d77bd04d948,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.252,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 18 21:23:27.498: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-sms52" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-sms52 test-cleanup-deployment-55ffc6b7b6- deployment-5656 /api/v1/namespaces/deployment-5656/pods/test-cleanup-deployment-55ffc6b7b6-sms52 7d2a3834-9752-442a-9331-8e0a14406689 849647 0 2020-03-18 21:23:27 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 18bf9bda-781f-4af8-ae7b-d97f3ee5dade 0xc001d923f7 0xc001d923f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zjdvd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zjdvd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zjdvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespa
ce:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:23:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:23:27.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5656" for this suite. • [SLOW TEST:5.366 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":49,"skipped":658,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:23:27.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 21:23:28.151: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 21:23:30.169: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720163408, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720163408, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720163408, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720163408, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the 
service has paired with the endpoint Mar 18 21:23:33.202: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:23:45.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-465" for this suite. STEP: Destroying namespace "webhook-465-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.945 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":50,"skipped":661,"failed":0} [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:23:45.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-d361e870-3666-472b-823e-3eded8f16967 Mar 18 21:23:45.648: INFO: Pod name my-hostname-basic-d361e870-3666-472b-823e-3eded8f16967: Found 0 pods out of 1 Mar 18 21:23:50.655: INFO: Pod name my-hostname-basic-d361e870-3666-472b-823e-3eded8f16967: Found 1 pods out of 1 Mar 18 21:23:50.655: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d361e870-3666-472b-823e-3eded8f16967" are running Mar 18 21:23:50.660: INFO: Pod "my-hostname-basic-d361e870-3666-472b-823e-3eded8f16967-rsgfr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 21:23:45 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2020-03-18 21:23:47 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 21:23:47 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 21:23:45 +0000 UTC Reason: Message:}]) Mar 18 21:23:50.660: INFO: Trying to dial the pod Mar 18 21:23:55.673: INFO: Controller my-hostname-basic-d361e870-3666-472b-823e-3eded8f16967: Got expected result from replica 1 [my-hostname-basic-d361e870-3666-472b-823e-3eded8f16967-rsgfr]: "my-hostname-basic-d361e870-3666-472b-823e-3eded8f16967-rsgfr", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:23:55.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1905" for this suite. • [SLOW TEST:10.214 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":51,"skipped":661,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:23:55.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-0f31cc6f-774e-4337-9990-c1ccb686f26a STEP: Creating the pod STEP: Updating configmap configmap-test-upd-0f31cc6f-774e-4337-9990-c1ccb686f26a STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:24:03.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3028" for this suite. • [SLOW TEST:8.148 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":690,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:24:03.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:24:19.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7477" for this suite. • [SLOW TEST:16.084 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":53,"skipped":722,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:24:19.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-02121d50-82c6-47c8-8017-19820bbf47b8 STEP: Creating a pod to test consume configMaps Mar 18 21:24:20.002: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-96df6b66-b43d-4832-b6ce-dd3e2bc74471" in namespace "projected-373" to be "success or failure" Mar 18 21:24:20.008: INFO: Pod "pod-projected-configmaps-96df6b66-b43d-4832-b6ce-dd3e2bc74471": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132709ms Mar 18 21:24:22.012: INFO: Pod "pod-projected-configmaps-96df6b66-b43d-4832-b6ce-dd3e2bc74471": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009874658s Mar 18 21:24:24.016: INFO: Pod "pod-projected-configmaps-96df6b66-b43d-4832-b6ce-dd3e2bc74471": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013732561s STEP: Saw pod success Mar 18 21:24:24.016: INFO: Pod "pod-projected-configmaps-96df6b66-b43d-4832-b6ce-dd3e2bc74471" satisfied condition "success or failure" Mar 18 21:24:24.019: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-96df6b66-b43d-4832-b6ce-dd3e2bc74471 container projected-configmap-volume-test: STEP: delete the pod Mar 18 21:24:24.084: INFO: Waiting for pod pod-projected-configmaps-96df6b66-b43d-4832-b6ce-dd3e2bc74471 to disappear Mar 18 21:24:24.087: INFO: Pod pod-projected-configmaps-96df6b66-b43d-4832-b6ce-dd3e2bc74471 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:24:24.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-373" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":742,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:24:24.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9803 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-9803 I0318 21:24:24.236425 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9803, replica count: 2 I0318 21:24:27.286810 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 21:24:30.287066 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 18 21:24:30.287: INFO: Creating new exec pod Mar 18 21:24:35.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9803 execpod628bx -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 18 21:24:38.087: INFO: stderr: "I0318 21:24:38.018813 380 log.go:172] (0xc000876bb0) (0xc000635ea0) Create stream\nI0318 21:24:38.018854 380 log.go:172] (0xc000876bb0) (0xc000635ea0) Stream added, broadcasting: 1\nI0318 21:24:38.022118 380 log.go:172] (0xc000876bb0) Reply frame received for 1\nI0318 21:24:38.022181 380 log.go:172] (0xc000876bb0) (0xc000598640) Create stream\nI0318 21:24:38.022201 380 log.go:172] (0xc000876bb0) (0xc000598640) Stream added, broadcasting: 3\nI0318 21:24:38.023214 380 log.go:172] (0xc000876bb0) Reply frame received for 3\nI0318 
21:24:38.023271 380 log.go:172] (0xc000876bb0) (0xc000725e00) Create stream\nI0318 21:24:38.023287 380 log.go:172] (0xc000876bb0) (0xc000725e00) Stream added, broadcasting: 5\nI0318 21:24:38.024163 380 log.go:172] (0xc000876bb0) Reply frame received for 5\nI0318 21:24:38.080748 380 log.go:172] (0xc000876bb0) Data frame received for 5\nI0318 21:24:38.080768 380 log.go:172] (0xc000725e00) (5) Data frame handling\nI0318 21:24:38.080778 380 log.go:172] (0xc000725e00) (5) Data frame sent\nI0318 21:24:38.080784 380 log.go:172] (0xc000876bb0) Data frame received for 5\nI0318 21:24:38.080788 380 log.go:172] (0xc000725e00) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0318 21:24:38.080806 380 log.go:172] (0xc000725e00) (5) Data frame sent\nI0318 21:24:38.081039 380 log.go:172] (0xc000876bb0) Data frame received for 3\nI0318 21:24:38.081047 380 log.go:172] (0xc000598640) (3) Data frame handling\nI0318 21:24:38.081102 380 log.go:172] (0xc000876bb0) Data frame received for 5\nI0318 21:24:38.081190 380 log.go:172] (0xc000725e00) (5) Data frame handling\nI0318 21:24:38.083434 380 log.go:172] (0xc000876bb0) Data frame received for 1\nI0318 21:24:38.083449 380 log.go:172] (0xc000635ea0) (1) Data frame handling\nI0318 21:24:38.083467 380 log.go:172] (0xc000635ea0) (1) Data frame sent\nI0318 21:24:38.083557 380 log.go:172] (0xc000876bb0) (0xc000635ea0) Stream removed, broadcasting: 1\nI0318 21:24:38.083652 380 log.go:172] (0xc000876bb0) Go away received\nI0318 21:24:38.083942 380 log.go:172] (0xc000876bb0) (0xc000635ea0) Stream removed, broadcasting: 1\nI0318 21:24:38.083965 380 log.go:172] (0xc000876bb0) (0xc000598640) Stream removed, broadcasting: 3\nI0318 21:24:38.083978 380 log.go:172] (0xc000876bb0) (0xc000725e00) Stream removed, broadcasting: 5\n" Mar 18 21:24:38.088: INFO: stdout: "" Mar 18 21:24:38.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9803 execpod628bx -- /bin/sh -x -c nc -zv -t -w 2 10.100.78.104 80' Mar 18 21:24:38.293: INFO: stderr: "I0318 21:24:38.230360 414 log.go:172] (0xc00010b4a0) (0xc000693ae0) Create stream\nI0318 21:24:38.230415 414 log.go:172] (0xc00010b4a0) (0xc000693ae0) Stream added, broadcasting: 1\nI0318 21:24:38.233217 414 log.go:172] (0xc00010b4a0) Reply frame received for 1\nI0318 21:24:38.233273 414 log.go:172] (0xc00010b4a0) (0xc00020e000) Create stream\nI0318 21:24:38.233287 414 log.go:172] (0xc00010b4a0) (0xc00020e000) Stream added, broadcasting: 3\nI0318 21:24:38.234141 414 log.go:172] (0xc00010b4a0) Reply frame received for 3\nI0318 21:24:38.234183 414 log.go:172] (0xc00010b4a0) (0xc00021a000) Create stream\nI0318 21:24:38.234198 414 log.go:172] (0xc00010b4a0) (0xc00021a000) Stream added, broadcasting: 5\nI0318 21:24:38.235095 414 log.go:172] (0xc00010b4a0) Reply frame received for 5\nI0318 21:24:38.289239 414 log.go:172] (0xc00010b4a0) Data frame received for 3\nI0318 21:24:38.289284 414 log.go:172] (0xc00020e000) (3) Data frame handling\nI0318 21:24:38.289313 414 log.go:172] (0xc00010b4a0) Data frame received for 5\nI0318 21:24:38.289332 414 log.go:172] (0xc00021a000) (5) Data frame handling\nI0318 21:24:38.289352 414 log.go:172] (0xc00021a000) (5) Data frame sent\nI0318 21:24:38.289364 414 log.go:172] (0xc00010b4a0) Data frame received for 5\nI0318 21:24:38.289374 414 log.go:172] (0xc00021a000) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.78.104 80\nConnection to 10.100.78.104 80 port [tcp/http] succeeded!\nI0318 
21:24:38.290531 414 log.go:172] (0xc00010b4a0) Data frame received for 1\nI0318 21:24:38.290563 414 log.go:172] (0xc000693ae0) (1) Data frame handling\nI0318 21:24:38.290582 414 log.go:172] (0xc000693ae0) (1) Data frame sent\nI0318 21:24:38.290596 414 log.go:172] (0xc00010b4a0) (0xc000693ae0) Stream removed, broadcasting: 1\nI0318 21:24:38.290609 414 log.go:172] (0xc00010b4a0) Go away received\nI0318 21:24:38.290943 414 log.go:172] (0xc00010b4a0) (0xc000693ae0) Stream removed, broadcasting: 1\nI0318 21:24:38.290961 414 log.go:172] (0xc00010b4a0) (0xc00020e000) Stream removed, broadcasting: 3\nI0318 21:24:38.290970 414 log.go:172] (0xc00010b4a0) (0xc00021a000) Stream removed, broadcasting: 5\n" Mar 18 21:24:38.293: INFO: stdout: "" Mar 18 21:24:38.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9803 execpod628bx -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31727' Mar 18 21:24:38.506: INFO: stderr: "I0318 21:24:38.425858 435 log.go:172] (0xc000a50000) (0xc0005f6640) Create stream\nI0318 21:24:38.425912 435 log.go:172] (0xc000a50000) (0xc0005f6640) Stream added, broadcasting: 1\nI0318 21:24:38.428300 435 log.go:172] (0xc000a50000) Reply frame received for 1\nI0318 21:24:38.428342 435 log.go:172] (0xc000a50000) (0xc0006f5400) Create stream\nI0318 21:24:38.428353 435 log.go:172] (0xc000a50000) (0xc0006f5400) Stream added, broadcasting: 3\nI0318 21:24:38.429083 435 log.go:172] (0xc000a50000) Reply frame received for 3\nI0318 21:24:38.429192 435 log.go:172] (0xc000a50000) (0xc0006f54a0) Create stream\nI0318 21:24:38.429211 435 log.go:172] (0xc000a50000) (0xc0006f54a0) Stream added, broadcasting: 5\nI0318 21:24:38.430085 435 log.go:172] (0xc000a50000) Reply frame received for 5\nI0318 21:24:38.499965 435 log.go:172] (0xc000a50000) Data frame received for 5\nI0318 21:24:38.500001 435 log.go:172] (0xc0006f54a0) (5) Data frame handling\nI0318 21:24:38.500019 435 log.go:172] (0xc0006f54a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 31727\nConnection to 172.17.0.10 31727 port [tcp/31727] succeeded!\nI0318 21:24:38.500196 435 log.go:172] (0xc000a50000) Data frame received for 3\nI0318 21:24:38.500220 435 log.go:172] (0xc0006f5400) (3) Data frame handling\nI0318 21:24:38.500237 435 log.go:172] (0xc000a50000) Data frame received for 5\nI0318 21:24:38.500242 435 log.go:172] (0xc0006f54a0) (5) Data frame handling\nI0318 21:24:38.501709 435 log.go:172] (0xc000a50000) Data frame received for 1\nI0318 21:24:38.501744 435 log.go:172] (0xc0005f6640) (1) Data frame handling\nI0318 21:24:38.501771 435 log.go:172] (0xc0005f6640) (1) Data frame sent\nI0318 21:24:38.501889 435 log.go:172] (0xc000a50000) (0xc0005f6640) Stream removed, broadcasting: 1\nI0318 21:24:38.501916 435 log.go:172] (0xc000a50000) Go away received\nI0318 21:24:38.502344 435 log.go:172] (0xc000a50000) (0xc0005f6640) Stream removed, broadcasting: 1\nI0318 21:24:38.502379 435 log.go:172] (0xc000a50000) (0xc0006f5400) Stream removed, broadcasting: 3\nI0318 21:24:38.502398 435 log.go:172] (0xc000a50000) (0xc0006f54a0) Stream removed, broadcasting: 5\n" Mar 18 21:24:38.506: INFO: stdout: "" Mar 18 21:24:38.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9803 execpod628bx -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31727' Mar 18 21:24:38.705: INFO: stderr: "I0318 21:24:38.628765 456 log.go:172] (0xc0000f4f20) (0xc0009d6000) Create stream\nI0318 21:24:38.628847 456 log.go:172] (0xc0000f4f20) (0xc0009d6000) Stream added, broadcasting: 1\nI0318 
21:24:38.633041 456 log.go:172] (0xc0000f4f20) Reply frame received for 1\nI0318 21:24:38.633277 456 log.go:172] (0xc0000f4f20) (0xc0006ebb80) Create stream\nI0318 21:24:38.633316 456 log.go:172] (0xc0000f4f20) (0xc0006ebb80) Stream added, broadcasting: 3\nI0318 21:24:38.634870 456 log.go:172] (0xc0000f4f20) Reply frame received for 3\nI0318 21:24:38.634934 456 log.go:172] (0xc0000f4f20) (0xc0009d60a0) Create stream\nI0318 21:24:38.634965 456 log.go:172] (0xc0000f4f20) (0xc0009d60a0) Stream added, broadcasting: 5\nI0318 21:24:38.636624 456 log.go:172] (0xc0000f4f20) Reply frame received for 5\nI0318 21:24:38.700094 456 log.go:172] (0xc0000f4f20) Data frame received for 5\nI0318 21:24:38.700112 456 log.go:172] (0xc0009d60a0) (5) Data frame handling\nI0318 21:24:38.700118 456 log.go:172] (0xc0009d60a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 31727\nConnection to 172.17.0.8 31727 port [tcp/31727] succeeded!\nI0318 21:24:38.700204 456 log.go:172] (0xc0000f4f20) Data frame received for 5\nI0318 21:24:38.700222 456 log.go:172] (0xc0009d60a0) (5) Data frame handling\nI0318 21:24:38.700410 456 log.go:172] (0xc0000f4f20) Data frame received for 3\nI0318 21:24:38.700426 456 log.go:172] (0xc0006ebb80) (3) Data frame handling\nI0318 21:24:38.702235 456 log.go:172] (0xc0000f4f20) Data frame received for 1\nI0318 21:24:38.702261 456 log.go:172] (0xc0009d6000) (1) Data frame handling\nI0318 21:24:38.702272 456 log.go:172] (0xc0009d6000) (1) Data frame sent\nI0318 21:24:38.702283 456 log.go:172] (0xc0000f4f20) (0xc0009d6000) Stream removed, broadcasting: 1\nI0318 21:24:38.702348 456 log.go:172] (0xc0000f4f20) Go away received\nI0318 21:24:38.702576 456 log.go:172] (0xc0000f4f20) (0xc0009d6000) Stream removed, broadcasting: 1\nI0318 21:24:38.702597 456 log.go:172] (0xc0000f4f20) (0xc0006ebb80) Stream removed, broadcasting: 3\nI0318 21:24:38.702607 456 log.go:172] (0xc0000f4f20) (0xc0009d60a0) Stream removed, broadcasting: 5\n" Mar 18 21:24:38.705: INFO: stdout: "" Mar 18 21:24:38.705: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:24:38.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9803" for this suite. 
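(For context — not output from this run: the test above creates a Service of type ExternalName, mutates it to NodePort, and then verifies reachability via the service name, the cluster IP, and both node IPs. A minimal sketch of the before/after spec, with illustrative names and addresses, would be:

apiVersion: v1
kind: Service
metadata:
  name: externalname-service      # mirrors the name used in the run; illustrative
spec:
  type: ExternalName
  externalName: example.example.com

The spec is then mutated so the same Service fronts real pods:

spec:
  type: NodePort
  selector:
    app: externalname-service     # illustrative selector
  ports:
  - port: 80
    targetPort: 80

Once the type is NodePort, the apiserver allocates a node port — 31727 in the run above — which is what the nc probes against 172.17.0.10 and 172.17.0.8 exercise.)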
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.648 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":55,"skipped":755,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:24:38.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 18 21:24:38.854: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 21:24:38.865: INFO: Number of nodes with available pods: 0 Mar 18 21:24:38.865: INFO: Node jerma-worker is running more than one daemon pod Mar 18 21:24:39.870: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 21:24:39.874: INFO: Number of nodes with available pods: 0 Mar 18 21:24:39.874: INFO: Node jerma-worker is running more than one daemon pod Mar 18 21:24:40.869: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 21:24:40.873: INFO: Number of nodes with available pods: 0 Mar 18 21:24:40.873: INFO: Node jerma-worker is running more than one daemon pod Mar 18 21:24:41.869: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 21:24:41.872: INFO: Number of nodes with available pods: 0 Mar 18 21:24:41.872: INFO: Node jerma-worker is running more than one daemon pod Mar 18 21:24:42.870: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 21:24:42.874: INFO: Number of nodes with available pods: 2 Mar 18 21:24:42.874: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Mar 18 21:24:42.889: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 21:24:42.895: INFO: Number of nodes with available pods: 2 Mar 18 21:24:42.895: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9617, will wait for the garbage collector to delete the pods Mar 18 21:24:44.075: INFO: Deleting DaemonSet.extensions daemon-set took: 9.795212ms Mar 18 21:24:44.675: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.280866ms Mar 18 21:24:59.278: INFO: Number of nodes with available pods: 0 Mar 18 21:24:59.279: INFO: Number of running nodes: 0, number of available pods: 0 Mar 18 21:24:59.281: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9617/daemonsets","resourceVersion":"850250"},"items":null} Mar 18 21:24:59.305: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9617/pods","resourceVersion":"850250"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:24:59.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9617" for this suite. • [SLOW TEST:20.575 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":56,"skipped":759,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:24:59.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8375.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8375.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8375.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8375.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8375.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8375.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 18 21:25:05.471: INFO: DNS probes using dns-8375/dns-test-75293a30-7503-458b-bd3b-0c7416680e7b succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:25:05.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8375" for this suite. • [SLOW TEST:6.467 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":57,"skipped":772,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:25:05.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-9fcf760e-78aa-41c8-a8b2-0eebed05609b STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-9fcf760e-78aa-41c8-a8b2-0eebed05609b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:25:14.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5117" for this suite. 
• [SLOW TEST:8.287 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":789,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:25:14.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-f69abca7-9335-44d8-b5ef-1cd343a7f611 STEP: Creating a pod to test consume configMaps Mar 18 21:25:14.179: INFO: Waiting up to 5m0s for pod "pod-configmaps-c4394e94-08d5-43c1-a334-88f26f5eca33" in namespace "configmap-5326" to be "success or failure" Mar 18 21:25:14.199: INFO: Pod "pod-configmaps-c4394e94-08d5-43c1-a334-88f26f5eca33": Phase="Pending", Reason="", readiness=false. Elapsed: 19.822695ms Mar 18 21:25:16.204: INFO: Pod "pod-configmaps-c4394e94-08d5-43c1-a334-88f26f5eca33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024385864s Mar 18 21:25:18.207: INFO: Pod "pod-configmaps-c4394e94-08d5-43c1-a334-88f26f5eca33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028249675s STEP: Saw pod success Mar 18 21:25:18.208: INFO: Pod "pod-configmaps-c4394e94-08d5-43c1-a334-88f26f5eca33" satisfied condition "success or failure" Mar 18 21:25:18.210: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c4394e94-08d5-43c1-a334-88f26f5eca33 container configmap-volume-test: STEP: delete the pod Mar 18 21:25:18.227: INFO: Waiting for pod pod-configmaps-c4394e94-08d5-43c1-a334-88f26f5eca33 to disappear Mar 18 21:25:18.231: INFO: Pod pod-configmaps-c4394e94-08d5-43c1-a334-88f26f5eca33 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:25:18.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5326" for this suite. 
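(For context — not output from this run: the plain ConfigMap-volume test above follows the canonical consume-as-volume pattern. A self-contained sketch with hypothetical names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-example   # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example          # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-example

The pod runs to completion and its log contains the key's value, matching the Pending → Succeeded phases logged above.)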
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":790,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:25:18.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1861 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 18 21:25:18.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1466' Mar 18 21:25:18.474: INFO: stderr: "" Mar 18 21:25:18.474: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1866 Mar 18 21:25:18.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1466' Mar 18 21:25:29.518: INFO: stderr: "" Mar 18 21:25:29.518: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:25:29.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1466" for this suite. 
• [SLOW TEST:11.291 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1857 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":60,"skipped":825,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:25:29.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Mar 18 21:25:29.606: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:25:29.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8183" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":61,"skipped":856,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:25:29.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-7c70713d-b345-4d83-a4f8-96cec4e60577 STEP: Creating a pod to test consume secrets Mar 18 21:25:29.779: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f5b67b7e-cc05-4be3-aac7-26893a1bcdbc" in namespace "projected-1944" to be "success or failure" Mar 18 21:25:29.794: INFO: Pod "pod-projected-secrets-f5b67b7e-cc05-4be3-aac7-26893a1bcdbc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.788288ms Mar 18 21:25:31.799: INFO: Pod "pod-projected-secrets-f5b67b7e-cc05-4be3-aac7-26893a1bcdbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020438854s Mar 18 21:25:33.803: INFO: Pod "pod-projected-secrets-f5b67b7e-cc05-4be3-aac7-26893a1bcdbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024779847s STEP: Saw pod success Mar 18 21:25:33.804: INFO: Pod "pod-projected-secrets-f5b67b7e-cc05-4be3-aac7-26893a1bcdbc" satisfied condition "success or failure" Mar 18 21:25:33.807: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-f5b67b7e-cc05-4be3-aac7-26893a1bcdbc container projected-secret-volume-test: STEP: delete the pod Mar 18 21:25:33.838: INFO: Waiting for pod pod-projected-secrets-f5b67b7e-cc05-4be3-aac7-26893a1bcdbc to disappear Mar 18 21:25:33.854: INFO: Pod pod-projected-secrets-f5b67b7e-cc05-4be3-aac7-26893a1bcdbc no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:25:33.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1944" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":887,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:25:33.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1897 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 18 21:25:33.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7874' Mar 18 21:25:34.033: INFO: stderr: "" Mar 18 21:25:34.033: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 18 21:25:39.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7874 -o json' Mar 18 21:25:39.176: INFO: stderr: "" Mar 18 21:25:39.176: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-18T21:25:34Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7874\",\n \"resourceVersion\": \"850548\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7874/pods/e2e-test-httpd-pod\",\n 
\"uid\": \"409761ce-2d5d-4ad3-840b-551ca51fc153\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-zvdxq\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-zvdxq\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-zvdxq\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-18T21:25:34Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-18T21:25:36Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-18T21:25:36Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-18T21:25:34Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://db5f2b6bae22f4ef19cbe6af46e2e1014885e8542a430a36054b202b4f3f4fc8\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-18T21:25:36Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.234\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.234\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-18T21:25:34Z\"\n }\n}\n" STEP: replace the image in the pod Mar 18 21:25:39.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7874' Mar 18 21:25:39.646: INFO: stderr: "" Mar 18 21:25:39.646: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1902 Mar 18 21:25:39.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7874' Mar 18 21:25:49.240: INFO: stderr: "" Mar 18 21:25:49.240: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 
21:25:49.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7874" for this suite. • [SLOW TEST:15.393 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1893 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":63,"skipped":939,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:25:49.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 18 21:25:57.375: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 18 21:25:57.382: INFO: Pod pod-with-poststart-http-hook still exists Mar 18 21:25:59.382: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 18 21:25:59.387: INFO: Pod pod-with-poststart-http-hook still exists Mar 18 21:26:01.382: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 18 21:26:01.389: INFO: Pod pod-with-poststart-http-hook still exists Mar 18 21:26:03.382: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 18 21:26:03.388: INFO: Pod pod-with-poststart-http-hook still exists Mar 18 21:26:05.382: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 18 21:26:05.386: INFO: Pod pod-with-poststart-http-hook still exists Mar 18 21:26:07.382: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 18 21:26:07.387: INFO: Pod pod-with-poststart-http-hook still exists Mar 18 21:26:09.382: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 18 21:26:09.386: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:26:09.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3359" for this suite. 
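(For context — not output from this run: the lifecycle-hook test above first starts a handler pod to receive the HTTPGet, then creates a pod whose postStart hook calls it. A minimal sketch of the hooked pod, with illustrative path, port, and host:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # name as used in the run
spec:
  containers:
  - name: main
    image: docker.io/library/httpd:2.4.38-alpine   # illustrative image
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart    # illustrative
          port: 8080                   # illustrative
          host: 10.244.1.10            # illustrative; the handler pod's IP

The kubelet blocks the container's transition to Running until the hook's HTTP call succeeds, which is what "check poststart hook" verifies.)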
• [SLOW TEST:20.151 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":949,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:26:09.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-0b91a1c0-e1ba-45d0-a87a-55a888d0a15a STEP: Creating a pod to test consume configMaps Mar 18 21:26:09.492: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9baf29e5-0823-4f8b-a0d0-e4d84e3de616" in namespace "projected-8052" to be "success or failure" Mar 18 21:26:09.496: INFO: Pod "pod-projected-configmaps-9baf29e5-0823-4f8b-a0d0-e4d84e3de616": Phase="Pending", Reason="", readiness=false. Elapsed: 3.822867ms Mar 18 21:26:11.551: INFO: Pod "pod-projected-configmaps-9baf29e5-0823-4f8b-a0d0-e4d84e3de616": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059502354s Mar 18 21:26:13.555: INFO: Pod "pod-projected-configmaps-9baf29e5-0823-4f8b-a0d0-e4d84e3de616": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063586257s STEP: Saw pod success Mar 18 21:26:13.555: INFO: Pod "pod-projected-configmaps-9baf29e5-0823-4f8b-a0d0-e4d84e3de616" satisfied condition "success or failure" Mar 18 21:26:13.559: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-9baf29e5-0823-4f8b-a0d0-e4d84e3de616 container projected-configmap-volume-test: STEP: delete the pod Mar 18 21:26:13.644: INFO: Waiting for pod pod-projected-configmaps-9baf29e5-0823-4f8b-a0d0-e4d84e3de616 to disappear Mar 18 21:26:13.701: INFO: Pod pod-projected-configmaps-9baf29e5-0823-4f8b-a0d0-e4d84e3de616 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:26:13.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8052" for this suite. 
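(For context — not output from this run: "with mappings as non-root" combines two things — an items mapping that renames a ConfigMap key to a different file path, and a non-root security context. A sketch with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-nonroot   # hypothetical
spec:
  securityContext:
    runAsUser: 1000          # non-root, as the test name implies
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/cfg/renamed-data"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-example   # hypothetical
          items:
          - key: data-1        # the "mapping": key data-1 appears as...
            path: renamed-data # ...a file named renamed-data)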
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":951,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:26:13.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 18 21:26:13.767: INFO: Waiting up to 5m0s for pod "pod-6e386ee7-4139-4d6d-9647-1979da2a1028" in namespace "emptydir-9387" to be "success or failure" Mar 18 21:26:13.771: INFO: Pod "pod-6e386ee7-4139-4d6d-9647-1979da2a1028": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129359ms Mar 18 21:26:15.775: INFO: Pod "pod-6e386ee7-4139-4d6d-9647-1979da2a1028": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008138097s Mar 18 21:26:17.779: INFO: Pod "pod-6e386ee7-4139-4d6d-9647-1979da2a1028": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012146821s STEP: Saw pod success Mar 18 21:26:17.780: INFO: Pod "pod-6e386ee7-4139-4d6d-9647-1979da2a1028" satisfied condition "success or failure" Mar 18 21:26:17.783: INFO: Trying to get logs from node jerma-worker2 pod pod-6e386ee7-4139-4d6d-9647-1979da2a1028 container test-container: STEP: delete the pod Mar 18 21:26:17.828: INFO: Waiting for pod pod-6e386ee7-4139-4d6d-9647-1979da2a1028 to disappear Mar 18 21:26:17.838: INFO: Pod pod-6e386ee7-4139-4d6d-9647-1979da2a1028 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:26:17.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9387" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1008,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:26:17.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 21:26:17.894: INFO: Waiting up to 5m0s for pod "downwardapi-volume-955afadb-e278-44ea-a69d-3d0b7d57fcbe" in namespace "downward-api-8260" to be "success or failure" Mar 18 21:26:17.898: INFO: Pod "downwardapi-volume-955afadb-e278-44ea-a69d-3d0b7d57fcbe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.895699ms Mar 18 21:26:19.900: INFO: Pod "downwardapi-volume-955afadb-e278-44ea-a69d-3d0b7d57fcbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006720405s Mar 18 21:26:21.905: INFO: Pod "downwardapi-volume-955afadb-e278-44ea-a69d-3d0b7d57fcbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01136906s STEP: Saw pod success Mar 18 21:26:21.905: INFO: Pod "downwardapi-volume-955afadb-e278-44ea-a69d-3d0b7d57fcbe" satisfied condition "success or failure" Mar 18 21:26:21.908: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-955afadb-e278-44ea-a69d-3d0b7d57fcbe container client-container: STEP: delete the pod Mar 18 21:26:21.930: INFO: Waiting for pod downwardapi-volume-955afadb-e278-44ea-a69d-3d0b7d57fcbe to disappear Mar 18 21:26:21.933: INFO: Pod downwardapi-volume-955afadb-e278-44ea-a69d-3d0b7d57fcbe no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:26:21.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8260" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1023,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:26:21.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Mar 18 21:26:26.567: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3766 pod-service-account-29c6687c-8614-4cda-abee-a8d8ff44a33b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 18 21:26:26.786: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3766 pod-service-account-29c6687c-8614-4cda-abee-a8d8ff44a33b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 18 21:26:27.017: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3766 pod-service-account-29c6687c-8614-4cda-abee-a8d8ff44a33b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:26:27.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3766" for this suite. 
• [SLOW TEST:5.309 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":68,"skipped":1027,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:26:27.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 18 21:26:27.300: INFO: Waiting up to 5m0s for pod "pod-0c7a63ca-2726-4bd5-b7ba-8b3fe5c59b32" in namespace "emptydir-6978" to be "success or failure" Mar 18 21:26:27.316: INFO: Pod "pod-0c7a63ca-2726-4bd5-b7ba-8b3fe5c59b32": Phase="Pending", Reason="", readiness=false. Elapsed: 15.419929ms Mar 18 21:26:29.320: INFO: Pod "pod-0c7a63ca-2726-4bd5-b7ba-8b3fe5c59b32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019809034s Mar 18 21:26:31.325: INFO: Pod "pod-0c7a63ca-2726-4bd5-b7ba-8b3fe5c59b32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02448418s STEP: Saw pod success Mar 18 21:26:31.325: INFO: Pod "pod-0c7a63ca-2726-4bd5-b7ba-8b3fe5c59b32" satisfied condition "success or failure" Mar 18 21:26:31.328: INFO: Trying to get logs from node jerma-worker2 pod pod-0c7a63ca-2726-4bd5-b7ba-8b3fe5c59b32 container test-container: STEP: delete the pod Mar 18 21:26:31.360: INFO: Waiting for pod pod-0c7a63ca-2726-4bd5-b7ba-8b3fe5c59b32 to disappear Mar 18 21:26:31.371: INFO: Pod pod-0c7a63ca-2726-4bd5-b7ba-8b3fe5c59b32 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:26:31.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6978" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1040,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:26:31.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-a54d14bf-e37e-49ca-94c2-de9a488ebf25 STEP: Creating a pod to test consume secrets Mar 18 21:26:31.456: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bd941b8e-2d1c-4083-a733-106524162b4a" in namespace "projected-5515" to be "success or failure" Mar 18 21:26:31.467: INFO: Pod "pod-projected-secrets-bd941b8e-2d1c-4083-a733-106524162b4a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.438775ms Mar 18 21:26:33.471: INFO: Pod "pod-projected-secrets-bd941b8e-2d1c-4083-a733-106524162b4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014352603s Mar 18 21:26:35.475: INFO: Pod "pod-projected-secrets-bd941b8e-2d1c-4083-a733-106524162b4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018745145s STEP: Saw pod success Mar 18 21:26:35.475: INFO: Pod "pod-projected-secrets-bd941b8e-2d1c-4083-a733-106524162b4a" satisfied condition "success or failure" Mar 18 21:26:35.478: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-bd941b8e-2d1c-4083-a733-106524162b4a container projected-secret-volume-test: STEP: delete the pod Mar 18 21:26:35.498: INFO: Waiting for pod pod-projected-secrets-bd941b8e-2d1c-4083-a733-106524162b4a to disappear Mar 18 21:26:35.503: INFO: Pod pod-projected-secrets-bd941b8e-2d1c-4083-a733-106524162b4a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:26:35.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5515" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1048,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:26:35.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:26:35.595: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-24226a90-8ef7-4686-80d3-f209a2cb6f03" in namespace "security-context-test-5709" to be "success or failure" Mar 18 21:26:35.610: INFO: Pod "alpine-nnp-false-24226a90-8ef7-4686-80d3-f209a2cb6f03": Phase="Pending", Reason="", readiness=false. Elapsed: 15.237391ms Mar 18 21:26:37.614: INFO: Pod "alpine-nnp-false-24226a90-8ef7-4686-80d3-f209a2cb6f03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019293495s Mar 18 21:26:39.618: INFO: Pod "alpine-nnp-false-24226a90-8ef7-4686-80d3-f209a2cb6f03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023599712s Mar 18 21:26:39.618: INFO: Pod "alpine-nnp-false-24226a90-8ef7-4686-80d3-f209a2cb6f03" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:26:39.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5709" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1057,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:26:39.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Mar 18 21:26:39.744: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Mar 18 21:26:39.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5728' Mar 18 21:26:40.008: INFO: stderr: "" Mar 18 21:26:40.008: INFO: stdout: "service/agnhost-slave created\n" Mar 18 21:26:40.008: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Mar 18 21:26:40.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5728' Mar 18 21:26:40.273: INFO: stderr: "" Mar 18 21:26:40.273: INFO: stdout: "service/agnhost-master created\n" Mar 18 21:26:40.273: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 18 21:26:40.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5728' Mar 18 21:26:40.540: INFO: stderr: "" Mar 18 21:26:40.540: INFO: stdout: "service/frontend created\n" Mar 18 21:26:40.541: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 18 21:26:40.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5728' Mar 18 21:26:40.784: INFO: stderr: "" Mar 18 21:26:40.784: INFO: stdout: "deployment.apps/frontend created\n" Mar 18 21:26:40.785: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 18 21:26:40.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5728' Mar 18 21:26:41.049: INFO: stderr: "" Mar 18 21:26:41.049: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 18 21:26:41.049: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 18 21:26:41.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5728' Mar 18 21:26:41.303: INFO: stderr: "" Mar 18 21:26:41.303: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 18 21:26:41.303: INFO: Waiting for all frontend pods to be Running. Mar 18 21:26:51.354: INFO: Waiting for frontend to serve content. Mar 18 21:26:51.365: INFO: Trying to add a new entry to the guestbook. Mar 18 21:26:51.376: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 18 21:26:51.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5728' Mar 18 21:26:51.560: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 21:26:51.560: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 18 21:26:51.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5728' Mar 18 21:26:51.711: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 18 21:26:51.711: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 18 21:26:51.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5728' Mar 18 21:26:51.845: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 21:26:51.845: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 18 21:26:51.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5728' Mar 18 21:26:51.950: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 21:26:51.950: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 18 21:26:51.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5728' Mar 18 21:26:52.057: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 21:26:52.057: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 18 21:26:52.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5728' Mar 18 21:26:52.187: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 21:26:52.187: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:26:52.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5728" for this suite. 
• [SLOW TEST:12.560 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:386 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":72,"skipped":1074,"failed":0} [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:26:52.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 18 21:26:52.307: INFO: Waiting up to 5m0s for pod "pod-e5f98409-9adf-4c3a-a5ca-3ff88c8d339a" in namespace "emptydir-14" to be "success or failure" Mar 18 21:26:52.329: INFO: Pod "pod-e5f98409-9adf-4c3a-a5ca-3ff88c8d339a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.621033ms Mar 18 21:26:54.333: INFO: Pod "pod-e5f98409-9adf-4c3a-a5ca-3ff88c8d339a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026758823s Mar 18 21:26:56.337: INFO: Pod "pod-e5f98409-9adf-4c3a-a5ca-3ff88c8d339a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030472187s Mar 18 21:26:58.341: INFO: Pod "pod-e5f98409-9adf-4c3a-a5ca-3ff88c8d339a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034666523s STEP: Saw pod success Mar 18 21:26:58.341: INFO: Pod "pod-e5f98409-9adf-4c3a-a5ca-3ff88c8d339a" satisfied condition "success or failure" Mar 18 21:26:58.344: INFO: Trying to get logs from node jerma-worker2 pod pod-e5f98409-9adf-4c3a-a5ca-3ff88c8d339a container test-container: STEP: delete the pod Mar 18 21:26:58.374: INFO: Waiting for pod pod-e5f98409-9adf-4c3a-a5ca-3ff88c8d339a to disappear Mar 18 21:26:58.385: INFO: Pod pod-e5f98409-9adf-4c3a-a5ca-3ff88c8d339a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:26:58.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-14" for this suite. 
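For context: the pod this test builds mounts an emptyDir backed by tmpfs and checks a file with mode 0666 inside it. A minimal sketch of that shape, assuming a generic busybox image in place of the suite's own mounttest helper (names and command are illustrative, not taken from this run):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                 # assumption; the suite uses its own test image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # "tmpfs" in the test name maps to medium: Memory

Such a pod should exit 0 and log '666' plus a tmpfs mount line, mirroring the success-or-failure condition checked above.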
• [SLOW TEST:6.198 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1074,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:26:58.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:26:58.546: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"688bd686-5a1c-49a2-a1ae-d6de195d56a8", Controller:(*bool)(0xc0028598da), BlockOwnerDeletion:(*bool)(0xc0028598db)}} Mar 18 21:26:58.559: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"aa8d3f5c-8d06-4f90-8265-b6972c26d8e2", Controller:(*bool)(0xc001f99922), BlockOwnerDeletion:(*bool)(0xc001f99923)}} Mar 18 21:26:58.612: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a5bef84c-7582-455c-ab16-54553d4d99ab", Controller:(*bool)(0xc002859a92), BlockOwnerDeletion:(*bool)(0xc002859a93)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:27:03.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8602" for this suite. 
• [SLOW TEST:5.232 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":74,"skipped":1085,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:27:03.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 18 21:27:03.686: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 18 21:27:03.714: INFO: Waiting for terminating namespaces to be deleted... Mar 18 21:27:03.720: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 18 21:27:03.725: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 18 21:27:03.725: INFO: Container kindnet-cni ready: true, restart count 0 Mar 18 21:27:03.725: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 18 21:27:03.725: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 21:27:03.725: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 18 21:27:03.730: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 18 21:27:03.730: INFO: Container kindnet-cni ready: true, restart count 0 Mar 18 21:27:03.730: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 18 21:27:03.730: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-adac64fa-7fb0-410f-8b9a-19dbffa887d7 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-adac64fa-7fb0-410f-8b9a-19dbffa887d7 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-adac64fa-7fb0-410f-8b9a-19dbffa887d7 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:27:11.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8977" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.246 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":75,"skipped":1103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:27:11.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:27:11.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1957" for this suite. 
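For context: the QOS class check passes when every container's requests equal its limits, which yields status.qosClass: Guaranteed. A minimal sketch of such a pod (name and image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-demo        # illustrative
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1    # assumption
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:                      # identical to requests => Guaranteed
        cpu: 100m
        memory: 100Mi

Reading .status.qosClass back (e.g. with kubectl get -o jsonpath) should yield Guaranteed, which is what the "verifying QOS class is set on the pod" step asserts.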
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":76,"skipped":1129,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:27:11.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 18 21:27:17.182: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:27:17.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8775" for this suite. 
• [SLOW TEST:5.442 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1135,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:27:17.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 18 21:27:25.679: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 18 21:27:25.698: INFO: Pod pod-with-prestop-http-hook still exists Mar 18 21:27:27.699: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 18 21:27:27.703: INFO: Pod pod-with-prestop-http-hook still exists Mar 18 21:27:29.699: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 18 21:27:29.703: INFO: Pod pod-with-prestop-http-hook still exists Mar 18 21:27:31.699: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 18 21:27:31.703: INFO: Pod pod-with-prestop-http-hook still exists Mar 18 21:27:33.699: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 18 21:27:33.703: INFO: Pod pod-with-prestop-http-hook still exists Mar 18 21:27:35.699: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 18 21:27:35.703: INFO: Pod pod-with-prestop-http-hook still exists Mar 18 21:27:37.699: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 18 21:27:37.702: INFO: Pod pod-with-prestop-http-hook still exists Mar 18 21:27:39.699: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 18 21:27:39.703: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:27:39.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8285" for this suite. 
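For context: the suite first starts a handler pod that serves HTTP (the "create the container to handle the HTTPGet hook request" step), then creates this pod whose preStop hook performs an HTTP GET against that handler during deletion; the "check prestop hook" step verifies the handler saw the request. A sketch of the hooked pod (image, host, port, and path are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # name as logged above
spec:
  containers:
  - name: app
    image: busybox                   # assumption; the suite uses agnhost
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        httpGet:
          host: "10.244.x.x"         # placeholder; the test targets the handler pod's IP
          path: /echo?msg=prestop    # illustrative
          port: 8080                 # illustrative

The long tail of "still exists" lines above is the deletion grace period: the hook must complete before the pod is removed.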
• [SLOW TEST:22.294 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1158,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:27:39.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 18 21:27:39.810: INFO: Waiting up to 5m0s for pod "pod-090037a6-8bff-4242-91d3-c9beeb25f3a9" in namespace "emptydir-2572" to be "success or failure" Mar 18 21:27:39.812: INFO: Pod "pod-090037a6-8bff-4242-91d3-c9beeb25f3a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.252729ms Mar 18 21:27:41.816: INFO: Pod "pod-090037a6-8bff-4242-91d3-c9beeb25f3a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005940037s Mar 18 21:27:43.820: INFO: Pod "pod-090037a6-8bff-4242-91d3-c9beeb25f3a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009912106s STEP: Saw pod success Mar 18 21:27:43.820: INFO: Pod "pod-090037a6-8bff-4242-91d3-c9beeb25f3a9" satisfied condition "success or failure" Mar 18 21:27:43.824: INFO: Trying to get logs from node jerma-worker2 pod pod-090037a6-8bff-4242-91d3-c9beeb25f3a9 container test-container: STEP: delete the pod Mar 18 21:27:43.849: INFO: Waiting for pod pod-090037a6-8bff-4242-91d3-c9beeb25f3a9 to disappear Mar 18 21:27:43.860: INFO: Pod pod-090037a6-8bff-4242-91d3-c9beeb25f3a9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:27:43.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2572" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1191,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:27:43.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:27:55.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5971" for this suite. • [SLOW TEST:11.293 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":80,"skipped":1205,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:27:55.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 18 21:27:55.223: INFO: Waiting up to 5m0s for pod "downward-api-f3bd8483-ebb2-4c1f-b2c1-c8cb6fea48f0" in namespace "downward-api-4832" to be "success or failure" Mar 18 21:27:55.226: INFO: Pod "downward-api-f3bd8483-ebb2-4c1f-b2c1-c8cb6fea48f0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.20423ms Mar 18 21:27:57.235: INFO: Pod "downward-api-f3bd8483-ebb2-4c1f-b2c1-c8cb6fea48f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012326263s Mar 18 21:27:59.239: INFO: Pod "downward-api-f3bd8483-ebb2-4c1f-b2c1-c8cb6fea48f0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016176169s STEP: Saw pod success Mar 18 21:27:59.239: INFO: Pod "downward-api-f3bd8483-ebb2-4c1f-b2c1-c8cb6fea48f0" satisfied condition "success or failure" Mar 18 21:27:59.242: INFO: Trying to get logs from node jerma-worker pod downward-api-f3bd8483-ebb2-4c1f-b2c1-c8cb6fea48f0 container dapi-container: STEP: delete the pod Mar 18 21:27:59.257: INFO: Waiting for pod downward-api-f3bd8483-ebb2-4c1f-b2c1-c8cb6fea48f0 to disappear Mar 18 21:27:59.273: INFO: Pod downward-api-f3bd8483-ebb2-4c1f-b2c1-c8cb6fea48f0 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:27:59.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4832" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1216,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:27:59.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:27:59.356: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.797685ms)
Mar 18 21:27:59.359: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.387662ms)
Mar 18 21:27:59.363: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.456384ms)
Mar 18 21:27:59.366: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.165767ms)
Mar 18 21:27:59.369: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.231996ms)
Mar 18 21:27:59.372: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.72756ms)
Mar 18 21:27:59.375: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.717521ms)
Mar 18 21:27:59.378: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.072038ms)
Mar 18 21:27:59.381: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.001019ms)
Mar 18 21:27:59.384: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.531436ms)
Mar 18 21:27:59.388: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.292731ms)
Mar 18 21:27:59.391: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.51279ms)
Mar 18 21:27:59.394: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.036324ms)
Mar 18 21:27:59.397: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.064673ms)
Mar 18 21:27:59.400: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.042118ms)
Mar 18 21:27:59.404: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.461339ms)
Mar 18 21:27:59.407: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.342681ms)
Mar 18 21:27:59.411: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.546422ms)
Mar 18 21:27:59.419: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 8.387022ms)
Mar 18 21:27:59.424: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/
(200; 4.751751ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:27:59.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2727" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":82,"skipped":1233,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:27:59.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 21:27:59.570: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29abdde8-98fa-4b22-8491-0ba3fbb4e65f" in namespace "downward-api-7801" to be "success or failure" Mar 18 21:27:59.574: INFO: Pod "downwardapi-volume-29abdde8-98fa-4b22-8491-0ba3fbb4e65f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018763ms Mar 18 21:28:01.581: INFO: Pod "downwardapi-volume-29abdde8-98fa-4b22-8491-0ba3fbb4e65f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011223683s Mar 18 21:28:03.585: INFO: Pod "downwardapi-volume-29abdde8-98fa-4b22-8491-0ba3fbb4e65f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015733976s STEP: Saw pod success Mar 18 21:28:03.586: INFO: Pod "downwardapi-volume-29abdde8-98fa-4b22-8491-0ba3fbb4e65f" satisfied condition "success or failure" Mar 18 21:28:03.589: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-29abdde8-98fa-4b22-8491-0ba3fbb4e65f container client-container: STEP: delete the pod Mar 18 21:28:03.644: INFO: Waiting for pod downwardapi-volume-29abdde8-98fa-4b22-8491-0ba3fbb4e65f to disappear Mar 18 21:28:03.650: INFO: Pod downwardapi-volume-29abdde8-98fa-4b22-8491-0ba3fbb4e65f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:28:03.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7801" for this suite. 
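For context: a downward API volume exposes container resources through resourceFieldRef, and when the container declares no CPU limit the projected value falls back to the node's allocatable CPU, which is what this test asserts. A sketch (names, image, and mount path are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                 # assumption
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # intentionally no resources.limits.cpu: the file then reports node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m              # report the value in millicores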
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:28:03.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Mar 18 21:28:03.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 18 21:28:03.802: INFO: stderr: "" Mar 18 21:28:03.802: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:28:03.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7383" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":84,"skipped":1308,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:28:03.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 21:28:03.903: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e7271cb9-88c9-4665-941c-2654d4109b86" in namespace "projected-1152" to be "success or failure" Mar 18 21:28:03.908: INFO: Pod "downwardapi-volume-e7271cb9-88c9-4665-941c-2654d4109b86": Phase="Pending", Reason="", readiness=false. Elapsed: 5.376102ms Mar 18 21:28:05.913: INFO: Pod "downwardapi-volume-e7271cb9-88c9-4665-941c-2654d4109b86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010241243s Mar 18 21:28:07.917: INFO: Pod "downwardapi-volume-e7271cb9-88c9-4665-941c-2654d4109b86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014373989s STEP: Saw pod success Mar 18 21:28:07.917: INFO: Pod "downwardapi-volume-e7271cb9-88c9-4665-941c-2654d4109b86" satisfied condition "success or failure" Mar 18 21:28:07.920: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e7271cb9-88c9-4665-941c-2654d4109b86 container client-container: STEP: delete the pod Mar 18 21:28:07.940: INFO: Waiting for pod downwardapi-volume-e7271cb9-88c9-4665-941c-2654d4109b86 to disappear Mar 18 21:28:07.944: INFO: Pod downwardapi-volume-e7271cb9-88c9-4665-941c-2654d4109b86 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:28:07.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1152" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1310,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:28:07.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Mar 18 21:28:12.049: INFO: Pod pod-hostip-a1c69778-dc4c-44a9-9e67-2fd652b121bb has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:28:12.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4462" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1325,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:28:12.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 21:28:12.120: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2fafb4dd-b2f2-48bc-b7cf-aed6d02ecb8a" in namespace "downward-api-8623" to be "success or failure" Mar 18 21:28:12.152: INFO: Pod "downwardapi-volume-2fafb4dd-b2f2-48bc-b7cf-aed6d02ecb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.595748ms Mar 18 21:28:14.156: INFO: Pod "downwardapi-volume-2fafb4dd-b2f2-48bc-b7cf-aed6d02ecb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036690625s Mar 18 21:28:16.161: INFO: Pod "downwardapi-volume-2fafb4dd-b2f2-48bc-b7cf-aed6d02ecb8a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040733071s STEP: Saw pod success Mar 18 21:28:16.161: INFO: Pod "downwardapi-volume-2fafb4dd-b2f2-48bc-b7cf-aed6d02ecb8a" satisfied condition "success or failure" Mar 18 21:28:16.164: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2fafb4dd-b2f2-48bc-b7cf-aed6d02ecb8a container client-container: STEP: delete the pod Mar 18 21:28:16.199: INFO: Waiting for pod downwardapi-volume-2fafb4dd-b2f2-48bc-b7cf-aed6d02ecb8a to disappear Mar 18 21:28:16.209: INFO: Pod downwardapi-volume-2fafb4dd-b2f2-48bc-b7cf-aed6d02ecb8a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:28:16.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8623" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1327,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:28:16.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:28:16.306: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:28:17.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1543" for this suite. 
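For context: "defaulting for requests and from storage" means a default declared in a structural schema is applied both when objects are submitted to the API server and when previously stored objects are read back. A sketch of a v1 CRD carrying such a default (group, names, and field are illustrative assumptions):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # illustrative
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1            # applied on create and when reading from storage

Creating a CronTab without spec.replicas should come back with replicas: 1 already set.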
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":88,"skipped":1334,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:28:17.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-7e369eec-1193-4fae-9252-eea1603fe9d5 in namespace container-probe-5922 Mar 18 21:28:21.644: INFO: Started pod test-webserver-7e369eec-1193-4fae-9252-eea1603fe9d5 in namespace container-probe-5922 STEP: checking the pod's current state and verifying that restartCount is present Mar 18 21:28:21.648: INFO: Initial restart count of pod test-webserver-7e369eec-1193-4fae-9252-eea1603fe9d5 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:32:22.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5922" for this suite. 
• [SLOW TEST:244.763 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1392,"failed":0} [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:32:22.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5200.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5200.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5200.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5200.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 18 21:32:28.569: INFO: DNS probes using dns-test-2ab77cad-d9c7-4aef-ac63-d04807bd89be succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5200.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5200.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5200.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5200.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 18 21:32:34.673: INFO: File wheezy_udp@dns-test-service-3.dns-5200.svc.cluster.local from pod dns-5200/dns-test-043a988d-3af5-473b-a24b-4133924263eb contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 18 21:32:34.676: INFO: File jessie_udp@dns-test-service-3.dns-5200.svc.cluster.local from pod dns-5200/dns-test-043a988d-3af5-473b-a24b-4133924263eb contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 18 21:32:34.676: INFO: Lookups using dns-5200/dns-test-043a988d-3af5-473b-a24b-4133924263eb failed for: [wheezy_udp@dns-test-service-3.dns-5200.svc.cluster.local jessie_udp@dns-test-service-3.dns-5200.svc.cluster.local] Mar 18 21:32:39.681: INFO: File wheezy_udp@dns-test-service-3.dns-5200.svc.cluster.local from pod dns-5200/dns-test-043a988d-3af5-473b-a24b-4133924263eb contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 18 21:32:39.685: INFO: File jessie_udp@dns-test-service-3.dns-5200.svc.cluster.local from pod dns-5200/dns-test-043a988d-3af5-473b-a24b-4133924263eb contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 18 21:32:39.685: INFO: Lookups using dns-5200/dns-test-043a988d-3af5-473b-a24b-4133924263eb failed for: [wheezy_udp@dns-test-service-3.dns-5200.svc.cluster.local jessie_udp@dns-test-service-3.dns-5200.svc.cluster.local] Mar 18 21:32:44.681: INFO: File wheezy_udp@dns-test-service-3.dns-5200.svc.cluster.local from pod dns-5200/dns-test-043a988d-3af5-473b-a24b-4133924263eb contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 18 21:32:44.686: INFO: File jessie_udp@dns-test-service-3.dns-5200.svc.cluster.local from pod dns-5200/dns-test-043a988d-3af5-473b-a24b-4133924263eb contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 18 21:32:44.686: INFO: Lookups using dns-5200/dns-test-043a988d-3af5-473b-a24b-4133924263eb failed for: [wheezy_udp@dns-test-service-3.dns-5200.svc.cluster.local jessie_udp@dns-test-service-3.dns-5200.svc.cluster.local] Mar 18 21:32:49.695: INFO: File wheezy_udp@dns-test-service-3.dns-5200.svc.cluster.local from pod dns-5200/dns-test-043a988d-3af5-473b-a24b-4133924263eb contains '' instead of 'bar.example.com.' Mar 18 21:32:49.699: INFO: File jessie_udp@dns-test-service-3.dns-5200.svc.cluster.local from pod dns-5200/dns-test-043a988d-3af5-473b-a24b-4133924263eb contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 18 21:32:49.699: INFO: Lookups using dns-5200/dns-test-043a988d-3af5-473b-a24b-4133924263eb failed for: [wheezy_udp@dns-test-service-3.dns-5200.svc.cluster.local jessie_udp@dns-test-service-3.dns-5200.svc.cluster.local] Mar 18 21:32:54.680: INFO: File wheezy_udp@dns-test-service-3.dns-5200.svc.cluster.local from pod dns-5200/dns-test-043a988d-3af5-473b-a24b-4133924263eb contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 18 21:32:54.684: INFO: File jessie_udp@dns-test-service-3.dns-5200.svc.cluster.local from pod dns-5200/dns-test-043a988d-3af5-473b-a24b-4133924263eb contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 18 21:32:54.684: INFO: Lookups using dns-5200/dns-test-043a988d-3af5-473b-a24b-4133924263eb failed for: [wheezy_udp@dns-test-service-3.dns-5200.svc.cluster.local jessie_udp@dns-test-service-3.dns-5200.svc.cluster.local] Mar 18 21:32:59.700: INFO: DNS probes using dns-test-043a988d-3af5-473b-a24b-4133924263eb succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5200.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5200.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5200.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5200.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 18 21:33:06.251: INFO: DNS probes using dns-test-bb5d99e2-e812-4d53-b4a6-d617bd126e23 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:33:06.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5200" for this suite. 
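For context: an ExternalName service is published by cluster DNS as a plain CNAME, which is why the probe pods dig for a CNAME record and why stale 'foo.example.com.' answers linger briefly after the spec changes. A sketch of the service as first created, reusing the names from this run:

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-5200
spec:
  type: ExternalName
  externalName: foo.example.com    # later changed to bar.example.com, then the service is converted to type: ClusterIP

Once externalName is changed, dns-test-service-3.dns-5200.svc.cluster.local resolves to bar.example.com.; the failed lookups above are the propagation window before both probe pods observed the new answer.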
• [SLOW TEST:44.037 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":90,"skipped":1392,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:33:06.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:33:06.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 18 21:33:06.781: INFO: stderr: "" Mar 18 21:33:06.781: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.3\", GitCommit:\"06ad960bfd03b39c8310aaf92d1e7c12ce618213\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:31:51Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:33:06.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5385" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":91,"skipped":1393,"failed":0} SS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:33:06.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 18 21:33:06.896: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:33:19.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9126" for this suite. • [SLOW TEST:12.450 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1395,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:33:19.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:33:30.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8641" for this suite. • [SLOW TEST:11.118 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":93,"skipped":1404,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:33:30.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-vp5w STEP: Creating a pod to test atomic-volume-subpath Mar 18 21:33:30.451: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vp5w" in namespace "subpath-5709" to be "success or failure" Mar 18 21:33:30.469: INFO: Pod "pod-subpath-test-projected-vp5w": Phase="Pending", Reason="", readiness=false. Elapsed: 17.926971ms Mar 18 21:33:32.501: INFO: Pod "pod-subpath-test-projected-vp5w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049508397s Mar 18 21:33:34.506: INFO: Pod "pod-subpath-test-projected-vp5w": Phase="Running", Reason="", readiness=true. Elapsed: 4.053954805s Mar 18 21:33:36.510: INFO: Pod "pod-subpath-test-projected-vp5w": Phase="Running", Reason="", readiness=true. Elapsed: 6.058106494s Mar 18 21:33:38.514: INFO: Pod "pod-subpath-test-projected-vp5w": Phase="Running", Reason="", readiness=true. Elapsed: 8.062344361s Mar 18 21:33:40.518: INFO: Pod "pod-subpath-test-projected-vp5w": Phase="Running", Reason="", readiness=true. Elapsed: 10.066517906s Mar 18 21:33:42.522: INFO: Pod "pod-subpath-test-projected-vp5w": Phase="Running", Reason="", readiness=true. Elapsed: 12.070589385s Mar 18 21:33:44.526: INFO: Pod "pod-subpath-test-projected-vp5w": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.074636314s Mar 18 21:33:46.531: INFO: Pod "pod-subpath-test-projected-vp5w": Phase="Running", Reason="", readiness=true. Elapsed: 16.079317097s Mar 18 21:33:48.535: INFO: Pod "pod-subpath-test-projected-vp5w": Phase="Running", Reason="", readiness=true. Elapsed: 18.083228193s Mar 18 21:33:50.538: INFO: Pod "pod-subpath-test-projected-vp5w": Phase="Running", Reason="", readiness=true. Elapsed: 20.086888857s Mar 18 21:33:52.543: INFO: Pod "pod-subpath-test-projected-vp5w": Phase="Running", Reason="", readiness=true. Elapsed: 22.091630593s Mar 18 21:33:54.547: INFO: Pod "pod-subpath-test-projected-vp5w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.095519412s STEP: Saw pod success Mar 18 21:33:54.547: INFO: Pod "pod-subpath-test-projected-vp5w" satisfied condition "success or failure" Mar 18 21:33:54.550: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-vp5w container test-container-subpath-projected-vp5w: STEP: delete the pod Mar 18 21:33:54.615: INFO: Waiting for pod pod-subpath-test-projected-vp5w to disappear Mar 18 21:33:54.638: INFO: Pod pod-subpath-test-projected-vp5w no longer exists STEP: Deleting pod pod-subpath-test-projected-vp5w Mar 18 21:33:54.638: INFO: Deleting pod "pod-subpath-test-projected-vp5w" in namespace "subpath-5709" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:33:54.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5709" for this suite. • [SLOW TEST:24.291 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":94,"skipped":1416,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:33:54.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 18 21:33:57.756: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container 
[AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:33:57.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3348" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1436,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:33:57.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 18 21:34:01.947: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:34:01.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3959" for this suite. 
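Both termination-message specs above turn on two container fields. A minimal pod sketch, with busybox assumed as a stand-in for the suite's test image: terminationMessagePath moves the message file to a non-default location, and the FallbackToLogsOnError policy copies logs into the message only on failure, so a succeeding pod reports an empty message, which is exactly what the second spec asserts.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox", // assumed stand-in image
				Command: []string{"/bin/true"},
				// Non-default location for the message file, as in the
				// first spec above.
				TerminationMessagePath: "/dev/termination-custom-log",
				// Copies container logs into the message only when the
				// container fails; a succeeding pod reports "" here.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	fmt.Printf("%s: policy=%s\n", pod.Name, pod.Spec.Containers[0].TerminationMessagePolicy)
}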
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1495,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:34:01.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-264cad27-323c-49d8-84a8-5f040803f2d8 STEP: Creating a pod to test consume secrets Mar 18 21:34:02.087: INFO: Waiting up to 5m0s for pod "pod-secrets-d6a4f5f1-f64c-41ed-bf48-c332b61f30cd" in namespace "secrets-7969" to be "success or failure" Mar 18 21:34:02.103: INFO: Pod "pod-secrets-d6a4f5f1-f64c-41ed-bf48-c332b61f30cd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.823742ms Mar 18 21:34:04.106: INFO: Pod "pod-secrets-d6a4f5f1-f64c-41ed-bf48-c332b61f30cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019240893s Mar 18 21:34:06.110: INFO: Pod "pod-secrets-d6a4f5f1-f64c-41ed-bf48-c332b61f30cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02335411s STEP: Saw pod success Mar 18 21:34:06.110: INFO: Pod "pod-secrets-d6a4f5f1-f64c-41ed-bf48-c332b61f30cd" satisfied condition "success or failure" Mar 18 21:34:06.113: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-d6a4f5f1-f64c-41ed-bf48-c332b61f30cd container secret-volume-test: STEP: delete the pod Mar 18 21:34:06.149: INFO: Waiting for pod pod-secrets-d6a4f5f1-f64c-41ed-bf48-c332b61f30cd to disappear Mar 18 21:34:06.164: INFO: Pod pod-secrets-d6a4f5f1-f64c-41ed-bf48-c332b61f30cd no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:34:06.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7969" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1504,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:34:06.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Mar 18 21:34:06.260: INFO: Waiting up to 5m0s for pod "client-containers-32ce37a9-a509-40f6-bcaa-3916dc2d96d6" in namespace "containers-6398" to be "success or failure" Mar 18 21:34:06.273: INFO: Pod "client-containers-32ce37a9-a509-40f6-bcaa-3916dc2d96d6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.62067ms Mar 18 21:34:08.295: INFO: Pod "client-containers-32ce37a9-a509-40f6-bcaa-3916dc2d96d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03512059s Mar 18 21:34:10.300: INFO: Pod "client-containers-32ce37a9-a509-40f6-bcaa-3916dc2d96d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039845607s STEP: Saw pod success Mar 18 21:34:10.300: INFO: Pod "client-containers-32ce37a9-a509-40f6-bcaa-3916dc2d96d6" satisfied condition "success or failure" Mar 18 21:34:10.303: INFO: Trying to get logs from node jerma-worker pod client-containers-32ce37a9-a509-40f6-bcaa-3916dc2d96d6 container test-container: STEP: delete the pod Mar 18 21:34:10.323: INFO: Waiting for pod client-containers-32ce37a9-a509-40f6-bcaa-3916dc2d96d6 to disappear Mar 18 21:34:10.327: INFO: Pod client-containers-32ce37a9-a509-40f6-bcaa-3916dc2d96d6 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:34:10.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6398" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1516,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:34:10.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 21:34:10.867: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 21:34:12.887: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164050, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164050, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164050, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164050, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 21:34:15.919: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:34:15.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8058-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:34:17.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3490" for this suite. STEP: Destroying namespace "webhook-3490-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.884 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":99,"skipped":1521,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:34:17.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:34:17.268: INFO: Creating ReplicaSet my-hostname-basic-c391f881-c439-4796-87cf-e152e7a766eb Mar 18 21:34:17.283: INFO: Pod name my-hostname-basic-c391f881-c439-4796-87cf-e152e7a766eb: Found 0 pods out of 1 Mar 18 21:34:22.292: INFO: Pod name my-hostname-basic-c391f881-c439-4796-87cf-e152e7a766eb: Found 1 pods out of 1 Mar 18 21:34:22.292: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c391f881-c439-4796-87cf-e152e7a766eb" is running Mar 18 21:34:22.298: INFO: Pod "my-hostname-basic-c391f881-c439-4796-87cf-e152e7a766eb-dk4q8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 21:34:17 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 21:34:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 21:34:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 21:34:17 +0000 UTC Reason: Message:}]) Mar 18 21:34:22.298: INFO: Trying to dial the pod Mar 18 21:34:27.310: INFO: Controller my-hostname-basic-c391f881-c439-4796-87cf-e152e7a766eb: Got expected result from replica 1 [my-hostname-basic-c391f881-c439-4796-87cf-e152e7a766eb-dk4q8]: "my-hostname-basic-c391f881-c439-4796-87cf-e152e7a766eb-dk4q8", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:34:27.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-115" for this suite. 
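The ReplicaSet spec above creates a single replica of a serve-hostname style image, waits for it to run, then dials it until it answers with its own pod name ("Got expected result from replica 1"). A sketch of the kind of object it builds; the image, args, and port are assumptions for illustration:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic",
						Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // assumed
						// agnhost's serve-hostname handler answers requests
						// with the pod's hostname, which is what the check
						// against each replica compares.
						Args:  []string{"serve-hostname"},
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}}, // assumed port
					}},
				},
			},
		},
	}
	fmt.Printf("%s wants %d replica(s)\n", rs.Name, *rs.Spec.Replicas)
}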
• [SLOW TEST:10.099 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":100,"skipped":1559,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:34:27.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 21:34:27.424: INFO: Waiting up to 5m0s for pod "downwardapi-volume-da946fdd-7823-4102-ba83-8756a245bace" in namespace "projected-5706" to be "success or failure" Mar 18 21:34:27.446: INFO: Pod "downwardapi-volume-da946fdd-7823-4102-ba83-8756a245bace": Phase="Pending", Reason="", readiness=false. Elapsed: 21.83213ms Mar 18 21:34:29.450: INFO: Pod "downwardapi-volume-da946fdd-7823-4102-ba83-8756a245bace": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026626792s Mar 18 21:34:31.455: INFO: Pod "downwardapi-volume-da946fdd-7823-4102-ba83-8756a245bace": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031455209s STEP: Saw pod success Mar 18 21:34:31.455: INFO: Pod "downwardapi-volume-da946fdd-7823-4102-ba83-8756a245bace" satisfied condition "success or failure" Mar 18 21:34:31.459: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-da946fdd-7823-4102-ba83-8756a245bace container client-container: STEP: delete the pod Mar 18 21:34:31.508: INFO: Waiting for pod downwardapi-volume-da946fdd-7823-4102-ba83-8756a245bace to disappear Mar 18 21:34:31.511: INFO: Pod downwardapi-volume-da946fdd-7823-4102-ba83-8756a245bace no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:34:31.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5706" for this suite. 
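"Projected downwardAPI ... cpu limit" means the pod reads its own container's CPU limit back through a projected volume file populated via resourceFieldRef. A sketch with illustrative names; the referenced container must actually set a CPU limit for limits.cpu to resolve, and the file's value is formatted according to the (default) divisor:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							// <mount>/cpu_limit will hold the CPU limit of
							// the named container.
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container", // illustrative
								Resource:      "limits.cpu",
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Println("projected sources:", len(vol.VolumeSource.Projected.Sources))
}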
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1565,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:34:31.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 18 21:34:36.105: INFO: Successfully updated pod "pod-update-19acfe7b-2c27-452f-8c37-5ff505cf9b43" STEP: verifying the updated pod is in kubernetes Mar 18 21:34:36.113: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:34:36.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2621" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1625,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:34:36.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:34:36.210: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-831d170a-3785-49b7-990b-a6c1616b6b0c" in namespace "security-context-test-4916" to be "success or failure" Mar 18 21:34:36.219: INFO: Pod "busybox-readonly-false-831d170a-3785-49b7-990b-a6c1616b6b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.802372ms Mar 18 21:34:38.223: INFO: Pod "busybox-readonly-false-831d170a-3785-49b7-990b-a6c1616b6b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012799155s Mar 18 21:34:40.228: INFO: Pod "busybox-readonly-false-831d170a-3785-49b7-990b-a6c1616b6b0c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017561116s Mar 18 21:34:40.228: INFO: Pod "busybox-readonly-false-831d170a-3785-49b7-990b-a6c1616b6b0c" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:34:40.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4916" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1634,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:34:40.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:34:40.367: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 18 21:34:40.379: INFO: Number of nodes with available pods: 0 Mar 18 21:34:40.379: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 18 21:34:40.421: INFO: Number of nodes with available pods: 0 Mar 18 21:34:40.421: INFO: Node jerma-worker2 is running more than one daemon pod Mar 18 21:34:41.426: INFO: Number of nodes with available pods: 0 Mar 18 21:34:41.426: INFO: Node jerma-worker2 is running more than one daemon pod Mar 18 21:34:42.571: INFO: Number of nodes with available pods: 0 Mar 18 21:34:42.571: INFO: Node jerma-worker2 is running more than one daemon pod Mar 18 21:34:43.425: INFO: Number of nodes with available pods: 1 Mar 18 21:34:43.425: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 18 21:34:43.454: INFO: Number of nodes with available pods: 1 Mar 18 21:34:43.454: INFO: Number of running nodes: 0, number of available pods: 1 Mar 18 21:34:44.463: INFO: Number of nodes with available pods: 0 Mar 18 21:34:44.463: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 18 21:34:44.491: INFO: Number of nodes with available pods: 0 Mar 18 21:34:44.491: INFO: Node jerma-worker2 is running more than one daemon pod Mar 18 21:34:45.494: INFO: Number of nodes with available pods: 0 Mar 18 21:34:45.494: INFO: Node jerma-worker2 is running more than one daemon pod Mar 18 21:34:46.505: INFO: Number of nodes with available pods: 0 Mar 18 21:34:46.505: INFO: Node jerma-worker2 is running more than one daemon pod Mar 18 21:34:47.496: INFO: Number of nodes with available pods: 0 Mar 18 21:34:47.496: INFO: Node jerma-worker2 is running more than one daemon pod Mar 18 21:34:48.494: INFO: Number of nodes with available pods: 0 Mar 18 21:34:48.494: INFO: Node jerma-worker2 is running more than one daemon pod Mar 18 21:34:49.529: INFO: Number of nodes with available pods: 0 Mar 18 21:34:49.529: INFO: Node jerma-worker2 is running more than one daemon pod Mar 18 21:34:50.496: INFO: Number of nodes with available pods: 0 Mar 18 21:34:50.496: INFO: Node jerma-worker2 is running more than one daemon pod Mar 18 21:34:51.495: INFO: Number of nodes with available pods: 0 Mar 18 21:34:51.495: INFO: Node jerma-worker2 is running more than one daemon pod Mar 18 21:34:52.495: INFO: Number of nodes with available pods: 1 Mar 18 21:34:52.495: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3849, will wait for the garbage collector to delete the pods Mar 18 21:34:52.565: INFO: Deleting DaemonSet.extensions daemon-set took: 6.435952ms Mar 18 21:34:52.865: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.275799ms Mar 18 21:34:59.569: INFO: Number of nodes with available pods: 0 Mar 18 21:34:59.569: INFO: Number of running nodes: 0, number of available pods: 0 Mar 18 21:34:59.572: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3849/daemonsets","resourceVersion":"853591"},"items":null} Mar 18 21:34:59.575: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3849/pods","resourceVersion":"853591"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 
Mar 18 21:34:59.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3849" for this suite. • [SLOW TEST:19.403 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":104,"skipped":1704,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:34:59.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:35:03.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4900" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1716,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:35:03.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 18 21:35:08.368: INFO: Successfully updated pod "annotationupdate1b81a32a-77d6-437d-9407-ba1931199fad" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:35:10.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5660" for this suite. 
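The annotation-update spec above works because a downward API volume file backed by fieldRef: metadata.annotations is rewritten by the kubelet after the pod is patched; the "Successfully updated pod" line is the patch, after which the test polls the file for the new value. A sketch of the volume involved, with illustrative names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					// The kubelet rewrites this file (on its sync period,
					// so eventually rather than instantly) when the pod's
					// annotations change.
					Path: "annotations",
					FieldRef: &corev1.ObjectFieldSelector{
						APIVersion: "v1",
						FieldPath:  "metadata.annotations",
					},
				}},
			},
		},
	}
	fmt.Println("downward API files:", len(vol.VolumeSource.DownwardAPI.Items))
}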
• [SLOW TEST:6.688 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1718,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:35:10.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 21:35:10.511: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c9b4c89-30ff-4116-a0eb-3fd1e8d91242" in namespace "downward-api-5405" to be "success or failure" Mar 18 21:35:10.514: INFO: Pod "downwardapi-volume-3c9b4c89-30ff-4116-a0eb-3fd1e8d91242": Phase="Pending", Reason="", readiness=false. Elapsed: 3.00557ms Mar 18 21:35:12.519: INFO: Pod "downwardapi-volume-3c9b4c89-30ff-4116-a0eb-3fd1e8d91242": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007307066s Mar 18 21:35:14.523: INFO: Pod "downwardapi-volume-3c9b4c89-30ff-4116-a0eb-3fd1e8d91242": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011508694s STEP: Saw pod success Mar 18 21:35:14.523: INFO: Pod "downwardapi-volume-3c9b4c89-30ff-4116-a0eb-3fd1e8d91242" satisfied condition "success or failure" Mar 18 21:35:14.526: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3c9b4c89-30ff-4116-a0eb-3fd1e8d91242 container client-container: STEP: delete the pod Mar 18 21:35:14.575: INFO: Waiting for pod downwardapi-volume-3c9b4c89-30ff-4116-a0eb-3fd1e8d91242 to disappear Mar 18 21:35:14.580: INFO: Pod downwardapi-volume-3c9b4c89-30ff-4116-a0eb-3fd1e8d91242 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:35:14.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5405" for this suite. 
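DefaultMode, checked by the spec above, applies one permission set to every file in the volume that lacks a per-item mode; the API stores it as a plain int32, so an octal 0400 in a manifest round-trips as decimal 256. A minimal sketch:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	defaultMode := int32(0400) // octal literal; decimal 256 on the wire
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				DefaultMode: &defaultMode,
				Items: []corev1.DownwardAPIVolumeFile{{
					// No per-item Mode, so this file gets DefaultMode.
					Path: "podname",
					FieldRef: &corev1.ObjectFieldSelector{
						APIVersion: "v1",
						FieldPath:  "metadata.name",
					},
				}},
			},
		},
	}
	fmt.Printf("default mode: %o\n", *vol.VolumeSource.DownwardAPI.DefaultMode)
}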
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1731,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:35:14.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4821 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4821 STEP: creating replication controller externalsvc in namespace services-4821 I0318 21:35:14.737225 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4821, replica count: 2 I0318 21:35:17.787677 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 21:35:20.787970 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 18 21:35:20.825: INFO: Creating new exec pod Mar 18 21:35:24.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4821 execpod7t4mk -- /bin/sh -x -c nslookup clusterip-service' Mar 18 21:35:27.500: INFO: stderr: "I0318 21:35:27.411407 989 log.go:172] (0xc000888b00) (0xc0008820a0) Create stream\nI0318 21:35:27.411449 989 log.go:172] (0xc000888b00) (0xc0008820a0) Stream added, broadcasting: 1\nI0318 21:35:27.415374 989 log.go:172] (0xc000888b00) Reply frame received for 1\nI0318 21:35:27.415424 989 log.go:172] (0xc000888b00) (0xc000882140) Create stream\nI0318 21:35:27.415436 989 log.go:172] (0xc000888b00) (0xc000882140) Stream added, broadcasting: 3\nI0318 21:35:27.416434 989 log.go:172] (0xc000888b00) Reply frame received for 3\nI0318 21:35:27.416474 989 log.go:172] (0xc000888b00) (0xc0008680a0) Create stream\nI0318 21:35:27.416484 989 log.go:172] (0xc000888b00) (0xc0008680a0) Stream added, broadcasting: 5\nI0318 21:35:27.417556 989 log.go:172] (0xc000888b00) Reply frame received for 5\nI0318 21:35:27.485249 989 log.go:172] (0xc000888b00) Data frame received for 5\nI0318 21:35:27.485299 989 log.go:172] (0xc0008680a0) (5) Data frame handling\nI0318 21:35:27.485329 989 log.go:172] (0xc0008680a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0318 21:35:27.490685 989 log.go:172] (0xc000888b00) Data frame received for 3\nI0318 21:35:27.490720 989 log.go:172] (0xc000882140) (3) Data frame handling\nI0318 21:35:27.490751 989 log.go:172] (0xc000882140) (3) Data frame sent\nI0318 
21:35:27.491376 989 log.go:172] (0xc000888b00) Data frame received for 3\nI0318 21:35:27.491394 989 log.go:172] (0xc000882140) (3) Data frame handling\nI0318 21:35:27.491411 989 log.go:172] (0xc000882140) (3) Data frame sent\nI0318 21:35:27.492066 989 log.go:172] (0xc000888b00) Data frame received for 5\nI0318 21:35:27.492212 989 log.go:172] (0xc0008680a0) (5) Data frame handling\nI0318 21:35:27.492252 989 log.go:172] (0xc000888b00) Data frame received for 3\nI0318 21:35:27.492274 989 log.go:172] (0xc000882140) (3) Data frame handling\nI0318 21:35:27.494598 989 log.go:172] (0xc000888b00) Data frame received for 1\nI0318 21:35:27.494836 989 log.go:172] (0xc0008820a0) (1) Data frame handling\nI0318 21:35:27.494943 989 log.go:172] (0xc0008820a0) (1) Data frame sent\nI0318 21:35:27.495015 989 log.go:172] (0xc000888b00) (0xc0008820a0) Stream removed, broadcasting: 1\nI0318 21:35:27.495050 989 log.go:172] (0xc000888b00) Go away received\nI0318 21:35:27.495910 989 log.go:172] (0xc000888b00) (0xc0008820a0) Stream removed, broadcasting: 1\nI0318 21:35:27.495932 989 log.go:172] (0xc000888b00) (0xc000882140) Stream removed, broadcasting: 3\nI0318 21:35:27.495946 989 log.go:172] (0xc000888b00) (0xc0008680a0) Stream removed, broadcasting: 5\n" Mar 18 21:35:27.500: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4821.svc.cluster.local\tcanonical name = externalsvc.services-4821.svc.cluster.local.\nName:\texternalsvc.services-4821.svc.cluster.local\nAddress: 10.107.150.67\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4821, will wait for the garbage collector to delete the pods Mar 18 21:35:27.560: INFO: Deleting ReplicationController externalsvc took: 7.055132ms Mar 18 21:35:27.860: INFO: Terminating ReplicationController externalsvc pods took: 300.2421ms Mar 18 21:35:39.600: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:35:39.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4821" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:25.058 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":108,"skipped":1758,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:35:39.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Mar 18 21:35:39.713: INFO: Waiting up to 5m0s for pod "pod-4635e8ae-81fd-4cdd-ac2f-6bc60443ae91" in namespace "emptydir-7046" to be "success or failure" Mar 18 21:35:39.717: INFO: Pod "pod-4635e8ae-81fd-4cdd-ac2f-6bc60443ae91": Phase="Pending", Reason="", readiness=false. Elapsed: 3.442195ms Mar 18 21:35:41.736: INFO: Pod "pod-4635e8ae-81fd-4cdd-ac2f-6bc60443ae91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022354243s Mar 18 21:35:43.740: INFO: Pod "pod-4635e8ae-81fd-4cdd-ac2f-6bc60443ae91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026338455s STEP: Saw pod success Mar 18 21:35:43.740: INFO: Pod "pod-4635e8ae-81fd-4cdd-ac2f-6bc60443ae91" satisfied condition "success or failure" Mar 18 21:35:43.743: INFO: Trying to get logs from node jerma-worker pod pod-4635e8ae-81fd-4cdd-ac2f-6bc60443ae91 container test-container: STEP: delete the pod Mar 18 21:35:43.796: INFO: Waiting for pod pod-4635e8ae-81fd-4cdd-ac2f-6bc60443ae91 to disappear Mar 18 21:35:43.801: INFO: Pod pod-4635e8ae-81fd-4cdd-ac2f-6bc60443ae91 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:35:43.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7046" for this suite. 
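"Default medium" in the emptyDir spec above means medium is left unset, so the volume lands on whatever node-local storage backs the kubelet, rather than tmpfs. A sketch of both variants for contrast:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Medium left empty: node default (disk-backed) storage, the case
	// the spec above exercises.
	onDisk := corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}
	// Explicit tmpfs-backed variant for comparison.
	inMemory := corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{
		Medium: corev1.StorageMediumMemory,
	}}
	fmt.Println(onDisk.EmptyDir.Medium == corev1.StorageMediumDefault, inMemory.EmptyDir.Medium)
}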
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1760,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:35:43.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:35:47.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4996" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1761,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:35:47.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-e311a128-3e02-4358-9324-7e49eaa7ccc3 STEP: Creating a pod to test consume secrets Mar 18 21:35:48.001: INFO: Waiting up to 5m0s for pod "pod-secrets-a31c68bc-3b79-49f5-a944-39554fe1ed97" in namespace "secrets-3347" to be "success or failure" Mar 18 21:35:48.023: INFO: Pod "pod-secrets-a31c68bc-3b79-49f5-a944-39554fe1ed97": Phase="Pending", Reason="", readiness=false. Elapsed: 21.219166ms Mar 18 21:35:50.026: INFO: Pod "pod-secrets-a31c68bc-3b79-49f5-a944-39554fe1ed97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024944596s Mar 18 21:35:52.031: INFO: Pod "pod-secrets-a31c68bc-3b79-49f5-a944-39554fe1ed97": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029053147s STEP: Saw pod success Mar 18 21:35:52.031: INFO: Pod "pod-secrets-a31c68bc-3b79-49f5-a944-39554fe1ed97" satisfied condition "success or failure" Mar 18 21:35:52.033: INFO: Trying to get logs from node jerma-worker pod pod-secrets-a31c68bc-3b79-49f5-a944-39554fe1ed97 container secret-env-test: STEP: delete the pod Mar 18 21:35:52.075: INFO: Waiting for pod pod-secrets-a31c68bc-3b79-49f5-a944-39554fe1ed97 to disappear Mar 18 21:35:52.121: INFO: Pod pod-secrets-a31c68bc-3b79-49f5-a944-39554fe1ed97 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:35:52.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3347" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1772,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:35:52.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 18 21:35:52.990: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 18 21:35:55.000: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164152, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164152, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164153, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164152, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 21:35:58.042: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:35:58.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:35:59.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7509" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.064 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":112,"skipped":1773,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:35:59.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:35:59.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7266' Mar 18 21:35:59.572: INFO: stderr: "" Mar 18 21:35:59.573: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 18 21:35:59.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7266' Mar 18 21:36:00.066: INFO: stderr: "" Mar 18 21:36:00.066: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 18 21:36:01.071: INFO: Selector matched 1 pods for map[app:agnhost] Mar 18 21:36:01.071: INFO: Found 0 / 1 Mar 18 21:36:02.071: INFO: Selector matched 1 pods for map[app:agnhost] Mar 18 21:36:02.071: INFO: Found 0 / 1 Mar 18 21:36:03.071: INFO: Selector matched 1 pods for map[app:agnhost] Mar 18 21:36:03.071: INFO: Found 1 / 1 Mar 18 21:36:03.071: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 18 21:36:03.076: INFO: Selector matched 1 pods for map[app:agnhost] Mar 18 21:36:03.076: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 18 21:36:03.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-gnzl6 --namespace=kubectl-7266' Mar 18 21:36:03.184: INFO: stderr: "" Mar 18 21:36:03.184: INFO: stdout: "Name: agnhost-master-gnzl6\nNamespace: kubectl-7266\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Wed, 18 Mar 2020 21:35:59 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.38\nIPs:\n IP: 10.244.2.38\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://be3c29c47fb908db61d7b726af1eb93efd65e3dd2fe195114758be1eb266ece9\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 18 Mar 2020 21:36:01 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-wqcvd (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-wqcvd:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-wqcvd\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-7266/agnhost-master-gnzl6 to jerma-worker2\n Normal Pulled 3s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 2s kubelet, jerma-worker2 Started container agnhost-master\n" Mar 18 21:36:03.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7266' Mar 18 21:36:03.302: INFO: stderr: "" Mar 18 21:36:03.302: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7266\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-gnzl6\n" Mar 18 21:36:03.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7266' Mar 18 21:36:03.412: INFO: stderr: "" Mar 18 21:36:03.412: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7266\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.99.83.179\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.38:6379\nSession Affinity: None\nEvents: \n" Mar 18 21:36:03.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Mar 18 21:36:03.543: INFO: stderr: "" Mar 18 21:36:03.543: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Wed, 18 Mar 2020 21:35:58 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 18 Mar 2020 21:33:29 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 18 Mar 2020 21:33:29 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 18 Mar 2020 21:33:29 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 18 Mar 2020 21:33:29 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 3d3h\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 3d3h\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d3h\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 3d3h\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 3d3h\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 3d3h\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d3h\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 3d3h\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d3h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 18 21:36:03.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7266' Mar 18 21:36:03.653: INFO: stderr: "" Mar 18 21:36:03.653: INFO: stdout: "Name: kubectl-7266\nLabels: e2e-framework=kubectl\n e2e-run=89b75577-6e5a-4c8f-87a0-f4404043876d\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo 
LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:36:03.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7266" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":113,"skipped":1813,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:36:03.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 21:36:04.426: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 21:36:06.471: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164164, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164164, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164164, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164164, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 21:36:09.529: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:36:10.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7049" for this suite. 
STEP: Destroying namespace "webhook-7049-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.447 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":114,"skipped":1841,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:36:10.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 18 21:36:10.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8396' Mar 18 21:36:10.590: INFO: stderr: "" Mar 18 21:36:10.591: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 18 21:36:10.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8396' Mar 18 21:36:10.959: INFO: stderr: "" Mar 18 21:36:10.959: INFO: stdout: "update-demo-nautilus-5ldts update-demo-nautilus-t6b4x " Mar 18 21:36:10.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ldts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8396' Mar 18 21:36:11.050: INFO: stderr: "" Mar 18 21:36:11.050: INFO: stdout: "" Mar 18 21:36:11.050: INFO: update-demo-nautilus-5ldts is created but not running Mar 18 21:36:16.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8396' Mar 18 21:36:16.150: INFO: stderr: "" Mar 18 21:36:16.150: INFO: stdout: "update-demo-nautilus-5ldts update-demo-nautilus-t6b4x " Mar 18 21:36:16.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ldts -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8396' Mar 18 21:36:16.242: INFO: stderr: "" Mar 18 21:36:16.242: INFO: stdout: "true" Mar 18 21:36:16.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ldts -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8396' Mar 18 21:36:16.344: INFO: stderr: "" Mar 18 21:36:16.344: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 21:36:16.344: INFO: validating pod update-demo-nautilus-5ldts Mar 18 21:36:16.349: INFO: got data: { "image": "nautilus.jpg" } Mar 18 21:36:16.349: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 21:36:16.349: INFO: update-demo-nautilus-5ldts is verified up and running Mar 18 21:36:16.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t6b4x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8396' Mar 18 21:36:16.448: INFO: stderr: "" Mar 18 21:36:16.448: INFO: stdout: "true" Mar 18 21:36:16.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t6b4x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8396' Mar 18 21:36:16.542: INFO: stderr: "" Mar 18 21:36:16.542: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 21:36:16.542: INFO: validating pod update-demo-nautilus-t6b4x Mar 18 21:36:16.555: INFO: got data: { "image": "nautilus.jpg" } Mar 18 21:36:16.555: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 21:36:16.555: INFO: update-demo-nautilus-t6b4x is verified up and running STEP: scaling down the replication controller Mar 18 21:36:16.557: INFO: scanned /root for discovery docs: Mar 18 21:36:16.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8396' Mar 18 21:36:17.738: INFO: stderr: "" Mar 18 21:36:17.738: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 18 21:36:17.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8396' Mar 18 21:36:17.830: INFO: stderr: "" Mar 18 21:36:17.830: INFO: stdout: "update-demo-nautilus-5ldts update-demo-nautilus-t6b4x " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 18 21:36:22.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8396' Mar 18 21:36:22.953: INFO: stderr: "" Mar 18 21:36:22.953: INFO: stdout: "update-demo-nautilus-5ldts update-demo-nautilus-t6b4x " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 18 21:36:27.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8396' Mar 18 21:36:28.056: INFO: stderr: "" Mar 18 21:36:28.056: INFO: stdout: "update-demo-nautilus-5ldts update-demo-nautilus-t6b4x " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 18 21:36:33.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8396' Mar 18 21:36:33.157: INFO: stderr: "" Mar 18 21:36:33.157: INFO: stdout: "update-demo-nautilus-5ldts " Mar 18 21:36:33.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ldts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8396' Mar 18 21:36:33.248: INFO: stderr: "" Mar 18 21:36:33.248: INFO: stdout: "true" Mar 18 21:36:33.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ldts -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8396' Mar 18 21:36:33.339: INFO: stderr: "" Mar 18 21:36:33.339: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 21:36:33.339: INFO: validating pod update-demo-nautilus-5ldts Mar 18 21:36:33.343: INFO: got data: { "image": "nautilus.jpg" } Mar 18 21:36:33.343: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 21:36:33.343: INFO: update-demo-nautilus-5ldts is verified up and running STEP: scaling up the replication controller Mar 18 21:36:33.344: INFO: scanned /root for discovery docs: Mar 18 21:36:33.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8396' Mar 18 21:36:34.457: INFO: stderr: "" Mar 18 21:36:34.457: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 18 21:36:34.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8396' Mar 18 21:36:34.563: INFO: stderr: "" Mar 18 21:36:34.563: INFO: stdout: "update-demo-nautilus-5ldts update-demo-nautilus-7s2fk " Mar 18 21:36:34.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ldts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8396' Mar 18 21:36:34.650: INFO: stderr: "" Mar 18 21:36:34.650: INFO: stdout: "true" Mar 18 21:36:34.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ldts -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8396' Mar 18 21:36:34.760: INFO: stderr: "" Mar 18 21:36:34.760: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 21:36:34.760: INFO: validating pod update-demo-nautilus-5ldts Mar 18 21:36:34.763: INFO: got data: { "image": "nautilus.jpg" } Mar 18 21:36:34.763: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 21:36:34.763: INFO: update-demo-nautilus-5ldts is verified up and running Mar 18 21:36:34.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7s2fk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8396' Mar 18 21:36:34.965: INFO: stderr: "" Mar 18 21:36:34.965: INFO: stdout: "" Mar 18 21:36:34.965: INFO: update-demo-nautilus-7s2fk is created but not running Mar 18 21:36:39.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8396' Mar 18 21:36:40.061: INFO: stderr: "" Mar 18 21:36:40.061: INFO: stdout: "update-demo-nautilus-5ldts update-demo-nautilus-7s2fk " Mar 18 21:36:40.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ldts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8396' Mar 18 21:36:40.150: INFO: stderr: "" Mar 18 21:36:40.150: INFO: stdout: "true" Mar 18 21:36:40.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ldts -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8396' Mar 18 21:36:40.237: INFO: stderr: "" Mar 18 21:36:40.237: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 21:36:40.237: INFO: validating pod update-demo-nautilus-5ldts Mar 18 21:36:40.241: INFO: got data: { "image": "nautilus.jpg" } Mar 18 21:36:40.241: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 18 21:36:40.241: INFO: update-demo-nautilus-5ldts is verified up and running Mar 18 21:36:40.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7s2fk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8396' Mar 18 21:36:40.325: INFO: stderr: "" Mar 18 21:36:40.325: INFO: stdout: "true" Mar 18 21:36:40.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7s2fk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8396' Mar 18 21:36:40.414: INFO: stderr: "" Mar 18 21:36:40.414: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 21:36:40.414: INFO: validating pod update-demo-nautilus-7s2fk Mar 18 21:36:40.417: INFO: got data: { "image": "nautilus.jpg" } Mar 18 21:36:40.417: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 21:36:40.417: INFO: update-demo-nautilus-7s2fk is verified up and running STEP: using delete to clean up resources Mar 18 21:36:40.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8396' Mar 18 21:36:40.516: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 21:36:40.516: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 18 21:36:40.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8396' Mar 18 21:36:40.618: INFO: stderr: "No resources found in kubectl-8396 namespace.\n" Mar 18 21:36:40.618: INFO: stdout: "" Mar 18 21:36:40.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8396 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 18 21:36:40.710: INFO: stderr: "" Mar 18 21:36:40.710: INFO: stdout: "update-demo-nautilus-5ldts\nupdate-demo-nautilus-7s2fk\n" Mar 18 21:36:41.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8396' Mar 18 21:36:41.430: INFO: stderr: "No resources found in kubectl-8396 namespace.\n" Mar 18 21:36:41.430: INFO: stdout: "" Mar 18 21:36:41.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8396 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 18 21:36:41.523: INFO: stderr: "" Mar 18 21:36:41.523: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:36:41.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8396" for this suite. 
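Note: the scale-and-poll loop recorded above can be reproduced by hand with the same two commands the test shells out to; the Go template simply prints the names of the pods matching the RC's selector:
  kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8396
  # repeat until the printed pod list matches the expected replica count
  kubectl get pods -l name=update-demo --namespace=kubectl-8396 \
    -o template --template='{{range .items}}{{.metadata.name}} {{end}}'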
• [SLOW TEST:31.423 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":115,"skipped":1858,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:36:41.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:36:45.672: INFO: Waiting up to 5m0s for pod "client-envvars-5c8b1f78-b0b6-4d70-b7cc-a6721fb3259f" in namespace "pods-2214" to be "success or failure" Mar 18 21:36:45.716: INFO: Pod "client-envvars-5c8b1f78-b0b6-4d70-b7cc-a6721fb3259f": Phase="Pending", Reason="", readiness=false. Elapsed: 44.170339ms Mar 18 21:36:47.720: INFO: Pod "client-envvars-5c8b1f78-b0b6-4d70-b7cc-a6721fb3259f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048346788s Mar 18 21:36:49.724: INFO: Pod "client-envvars-5c8b1f78-b0b6-4d70-b7cc-a6721fb3259f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052200212s STEP: Saw pod success Mar 18 21:36:49.724: INFO: Pod "client-envvars-5c8b1f78-b0b6-4d70-b7cc-a6721fb3259f" satisfied condition "success or failure" Mar 18 21:36:49.728: INFO: Trying to get logs from node jerma-worker pod client-envvars-5c8b1f78-b0b6-4d70-b7cc-a6721fb3259f container env3cont: STEP: delete the pod Mar 18 21:36:49.747: INFO: Waiting for pod client-envvars-5c8b1f78-b0b6-4d70-b7cc-a6721fb3259f to disappear Mar 18 21:36:49.770: INFO: Pod client-envvars-5c8b1f78-b0b6-4d70-b7cc-a6721fb3259f no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:36:49.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2214" for this suite. 
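Note: the pods test above relies on the kubelet injecting service discovery variables (NAME_SERVICE_HOST, NAME_SERVICE_PORT, and the docker-link-style NAME_PORT_* set) into containers started after the service exists. A quick way to inspect them in any running pod is sketched below; the pod and service names are placeholders:
  # variables exist only for services created before the pod started
  kubectl exec <pod-name> -- env | grep -i <service-name>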
• [SLOW TEST:8.248 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1868,"failed":0} [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:36:49.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1464 STEP: creating an pod Mar 18 21:36:49.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-9479 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 18 21:36:49.929: INFO: stderr: "" Mar 18 21:36:49.929: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Mar 18 21:36:49.929: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 18 21:36:49.929: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-9479" to be "running and ready, or succeeded" Mar 18 21:36:49.940: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.785697ms Mar 18 21:36:51.944: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014758749s Mar 18 21:36:53.948: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.018969846s Mar 18 21:36:53.948: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 18 21:36:53.948: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Mar 18 21:36:53.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9479' Mar 18 21:36:54.052: INFO: stderr: "" Mar 18 21:36:54.052: INFO: stdout: "I0318 21:36:52.003217 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/cxxt 560\nI0318 21:36:52.203474 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/q7v5 242\nI0318 21:36:52.403420 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/r6h 538\nI0318 21:36:52.603408 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/qqct 582\nI0318 21:36:52.803425 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/nbj 527\nI0318 21:36:53.003418 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/xks 596\nI0318 21:36:53.203494 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/xlg6 425\nI0318 21:36:53.403408 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/ljl 409\nI0318 21:36:53.603566 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/fnk 388\nI0318 21:36:53.803418 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/5s4 265\nI0318 21:36:54.003515 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/fnn 218\n" STEP: limiting log lines Mar 18 21:36:54.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9479 --tail=1' Mar 18 21:36:54.164: INFO: stderr: "" Mar 18 21:36:54.164: INFO: stdout: "I0318 21:36:54.003515 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/fnn 218\n" Mar 18 21:36:54.164: INFO: got output "I0318 21:36:54.003515 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/fnn 218\n" STEP: limiting log bytes Mar 18 21:36:54.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9479 --limit-bytes=1' Mar 18 21:36:54.266: INFO: stderr: "" Mar 18 21:36:54.266: INFO: stdout: "I" Mar 18 21:36:54.266: INFO: got output "I" STEP: exposing timestamps Mar 18 21:36:54.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9479 --tail=1 --timestamps' Mar 18 21:36:54.375: INFO: stderr: "" Mar 18 21:36:54.375: INFO: stdout: "2020-03-18T21:36:54.20374871Z I0318 21:36:54.203537 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/fx84 552\n" Mar 18 21:36:54.375: INFO: got output "2020-03-18T21:36:54.20374871Z I0318 21:36:54.203537 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/fx84 552\n" STEP: restricting to a time range Mar 18 21:36:56.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9479 --since=1s' Mar 18 21:36:56.981: INFO: stderr: "" Mar 18 21:36:56.982: INFO: stdout: "I0318 21:36:56.003439 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/9dfn 295\nI0318 21:36:56.203446 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/4rj 595\nI0318 21:36:56.403429 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/xrs 389\nI0318 21:36:56.603406 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/kzc 452\nI0318 21:36:56.803403 1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/zkz5 254\n" Mar 18 21:36:56.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator 
--namespace=kubectl-9479 --since=24h' Mar 18 21:36:57.091: INFO: stderr: "" Mar 18 21:36:57.091: INFO: stdout: "I0318 21:36:52.003217 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/cxxt 560\nI0318 21:36:52.203474 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/q7v5 242\nI0318 21:36:52.403420 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/r6h 538\nI0318 21:36:52.603408 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/qqct 582\nI0318 21:36:52.803425 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/nbj 527\nI0318 21:36:53.003418 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/xks 596\nI0318 21:36:53.203494 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/xlg6 425\nI0318 21:36:53.403408 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/ljl 409\nI0318 21:36:53.603566 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/fnk 388\nI0318 21:36:53.803418 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/5s4 265\nI0318 21:36:54.003515 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/fnn 218\nI0318 21:36:54.203537 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/fx84 552\nI0318 21:36:54.403419 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/b5r4 402\nI0318 21:36:54.603427 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/gkhk 241\nI0318 21:36:54.803392 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/46x 561\nI0318 21:36:55.003417 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/26q 462\nI0318 21:36:55.203416 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/qx2 213\nI0318 21:36:55.403417 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/t5h 495\nI0318 21:36:55.603467 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/2w8 451\nI0318 21:36:55.803407 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/ct47 230\nI0318 21:36:56.003439 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/9dfn 295\nI0318 21:36:56.203446 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/4rj 595\nI0318 21:36:56.403429 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/xrs 389\nI0318 21:36:56.603406 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/kzc 452\nI0318 21:36:56.803403 1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/zkz5 254\nI0318 21:36:57.003447 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/cff 578\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1470 Mar 18 21:36:57.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-9479' Mar 18 21:36:59.844: INFO: stderr: "" Mar 18 21:36:59.844: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:36:59.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9479" for this suite. 
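Note: the four log-filter flags exercised above compose freely and can be replayed against any pod; the pod/container pair below is the one from this test:
  kubectl logs logs-generator logs-generator --namespace=kubectl-9479 --tail=1          # last line only
  kubectl logs logs-generator logs-generator --namespace=kubectl-9479 --limit-bytes=1   # first byte only
  kubectl logs logs-generator logs-generator --namespace=kubectl-9479 --tail=1 --timestamps
  kubectl logs logs-generator logs-generator --namespace=kubectl-9479 --since=1s        # time-windowed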
• [SLOW TEST:10.073 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":117,"skipped":1868,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:36:59.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Mar 18 21:36:59.915: INFO: Waiting up to 5m0s for pod "client-containers-f3e37d4d-ed1e-414b-80ef-80fd64fabebe" in namespace "containers-11" to be "success or failure" Mar 18 21:36:59.956: INFO: Pod "client-containers-f3e37d4d-ed1e-414b-80ef-80fd64fabebe": Phase="Pending", Reason="", readiness=false. Elapsed: 40.790748ms Mar 18 21:37:01.961: INFO: Pod "client-containers-f3e37d4d-ed1e-414b-80ef-80fd64fabebe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045574965s Mar 18 21:37:03.964: INFO: Pod "client-containers-f3e37d4d-ed1e-414b-80ef-80fd64fabebe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049474227s STEP: Saw pod success Mar 18 21:37:03.964: INFO: Pod "client-containers-f3e37d4d-ed1e-414b-80ef-80fd64fabebe" satisfied condition "success or failure" Mar 18 21:37:03.967: INFO: Trying to get logs from node jerma-worker pod client-containers-f3e37d4d-ed1e-414b-80ef-80fd64fabebe container test-container: STEP: delete the pod Mar 18 21:37:04.000: INFO: Waiting for pod client-containers-f3e37d4d-ed1e-414b-80ef-80fd64fabebe to disappear Mar 18 21:37:04.008: INFO: Pod client-containers-f3e37d4d-ed1e-414b-80ef-80fd64fabebe no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:37:04.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-11" for this suite. 
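Note: the containers test above verifies that a pod's args field replaces the image's default CMD (command would replace ENTRYPOINT instead). A minimal sketch of the same idea, with an illustrative pod name and argument list rather than the one the framework generates:
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: args-override-demo        # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      args: ["echo", "overridden"]  # replaces the image CMD
  EOF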
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1870,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:37:04.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 21:37:04.408: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 21:37:06.417: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164224, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164224, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164224, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164224, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 21:37:09.447: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 18 21:37:09.486: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:37:09.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9608" for this suite. STEP: Destroying namespace "webhook-9608-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.643 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":119,"skipped":1882,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:37:09.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 18 21:37:09.706: INFO: namespace kubectl-5023 Mar 18 21:37:09.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5023' Mar 18 21:37:09.997: INFO: stderr: "" Mar 18 21:37:09.997: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 18 21:37:11.002: INFO: Selector matched 1 pods for map[app:agnhost] Mar 18 21:37:11.002: INFO: Found 0 / 1 Mar 18 21:37:12.028: INFO: Selector matched 1 pods for map[app:agnhost] Mar 18 21:37:12.028: INFO: Found 0 / 1 Mar 18 21:37:13.001: INFO: Selector matched 1 pods for map[app:agnhost] Mar 18 21:37:13.001: INFO: Found 1 / 1 Mar 18 21:37:13.001: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 18 21:37:13.004: INFO: Selector matched 1 pods for map[app:agnhost] Mar 18 21:37:13.004: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 18 21:37:13.004: INFO: wait on agnhost-master startup in kubectl-5023 Mar 18 21:37:13.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-5cg4h agnhost-master --namespace=kubectl-5023' Mar 18 21:37:13.122: INFO: stderr: "" Mar 18 21:37:13.123: INFO: stdout: "Paused\n" STEP: exposing RC Mar 18 21:37:13.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5023' Mar 18 21:37:13.255: INFO: stderr: "" Mar 18 21:37:13.255: INFO: stdout: "service/rm2 exposed\n" Mar 18 21:37:13.264: INFO: Service rm2 in namespace kubectl-5023 found. 
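Note: this test chains kubectl expose twice, once from the replication controller (rm2, above) and once from the resulting service (rm3, in the step that follows). Both forms copy the source object's selector, so all three objects front the same pod:
  kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5023
  kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5023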
STEP: exposing service Mar 18 21:37:15.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5023' Mar 18 21:37:15.393: INFO: stderr: "" Mar 18 21:37:15.393: INFO: stdout: "service/rm3 exposed\n" Mar 18 21:37:15.401: INFO: Service rm3 in namespace kubectl-5023 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:37:17.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5023" for this suite. • [SLOW TEST:7.757 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1295 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":120,"skipped":1956,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:37:17.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 18 21:37:17.480: INFO: >>> kubeConfig: /root/.kube/config Mar 18 21:37:19.462: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:37:29.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5726" for this suite. 
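Note: once a CRD's validation schema is published to /openapi/v2, as verified above, client tooling can render it; kubectl explain is the quickest check (the kind name below is a placeholder for whatever kinds the CRDs define):
  kubectl explain <crd-kind>        # top-level fields from the published schema
  kubectl explain <crd-kind>.spec   # drill into a nested field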
• [SLOW TEST:12.514 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":121,"skipped":1967,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:37:29.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Mar 18 21:37:30.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4059 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 18 21:37:33.013: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0318 21:37:32.948777 2066 log.go:172] (0xc00010c6e0) (0xc000728140) Create stream\nI0318 21:37:32.948825 2066 log.go:172] (0xc00010c6e0) (0xc000728140) Stream added, broadcasting: 1\nI0318 21:37:32.951156 2066 log.go:172] (0xc00010c6e0) Reply frame received for 1\nI0318 21:37:32.951199 2066 log.go:172] (0xc00010c6e0) (0xc0007ac000) Create stream\nI0318 21:37:32.951209 2066 log.go:172] (0xc00010c6e0) (0xc0007ac000) Stream added, broadcasting: 3\nI0318 21:37:32.952137 2066 log.go:172] (0xc00010c6e0) Reply frame received for 3\nI0318 21:37:32.952192 2066 log.go:172] (0xc00010c6e0) (0xc0006579a0) Create stream\nI0318 21:37:32.952207 2066 log.go:172] (0xc00010c6e0) (0xc0006579a0) Stream added, broadcasting: 5\nI0318 21:37:32.953508 2066 log.go:172] (0xc00010c6e0) Reply frame received for 5\nI0318 21:37:32.953549 2066 log.go:172] (0xc00010c6e0) (0xc000657a40) Create stream\nI0318 21:37:32.953559 2066 log.go:172] (0xc00010c6e0) (0xc000657a40) Stream added, broadcasting: 7\nI0318 21:37:32.954456 2066 log.go:172] (0xc00010c6e0) Reply frame received for 7\nI0318 21:37:32.954603 2066 log.go:172] (0xc0007ac000) (3) Writing data frame\nI0318 21:37:32.954727 2066 log.go:172] (0xc0007ac000) (3) Writing data frame\nI0318 21:37:32.955557 2066 log.go:172] (0xc00010c6e0) Data frame received for 5\nI0318 21:37:32.955577 2066 log.go:172] (0xc0006579a0) (5) Data frame handling\nI0318 21:37:32.955589 2066 log.go:172] (0xc0006579a0) (5) Data frame sent\nI0318 21:37:32.956130 2066 log.go:172] (0xc00010c6e0) Data frame received for 5\nI0318 21:37:32.956170 2066 log.go:172] (0xc0006579a0) (5) Data frame handling\nI0318 21:37:32.956205 2066 log.go:172] (0xc0006579a0) (5) Data frame sent\nI0318 21:37:32.991491 2066 log.go:172] (0xc00010c6e0) Data frame received for 5\nI0318 21:37:32.991527 2066 log.go:172] (0xc0006579a0) (5) Data frame handling\nI0318 21:37:32.991778 2066 log.go:172] (0xc00010c6e0) Data frame received for 7\nI0318 21:37:32.991810 2066 log.go:172] (0xc000657a40) (7) Data frame handling\nI0318 21:37:32.992150 2066 log.go:172] (0xc00010c6e0) Data frame received for 1\nI0318 21:37:32.992172 2066 log.go:172] (0xc000728140) (1) Data frame handling\nI0318 21:37:32.992194 2066 log.go:172] (0xc000728140) (1) Data frame sent\nI0318 21:37:32.992215 2066 log.go:172] (0xc00010c6e0) (0xc000728140) Stream removed, broadcasting: 1\nI0318 21:37:32.992332 2066 log.go:172] (0xc00010c6e0) (0xc0007ac000) Stream removed, broadcasting: 3\nI0318 21:37:32.992372 2066 log.go:172] (0xc00010c6e0) Go away received\nI0318 21:37:32.992664 2066 log.go:172] (0xc00010c6e0) (0xc000728140) Stream removed, broadcasting: 1\nI0318 21:37:32.992687 2066 log.go:172] (0xc00010c6e0) (0xc0007ac000) Stream removed, broadcasting: 3\nI0318 21:37:32.992700 2066 log.go:172] (0xc00010c6e0) (0xc0006579a0) Stream removed, broadcasting: 5\nI0318 21:37:32.992715 2066 log.go:172] (0xc00010c6e0) (0xc000657a40) Stream removed, broadcasting: 7\n" Mar 18 21:37:33.013: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:37:35.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4059" for this suite. 
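Note: the invocation above uses the since-removed job/v1 generator; the deprecation warning in stderr names the replacement. The original command is shown below with shell quoting reconstructed (the log strips it), followed by a rough modern equivalent that creates the Job directly (there is no one-flag equivalent of the --rm/--attach/--stdin combination):
  kubectl run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 \
    --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin \
    --namespace=kubectl-4059 -- sh -c 'cat && echo stdin closed'
  # post-1.17 clusters: create the Job explicitly
  kubectl create job e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 \
    -- sh -c 'echo stdin closed'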
• [SLOW TEST:5.093 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1944 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":122,"skipped":1972,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:37:35.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0318 21:37:45.163110 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 18 21:37:45.163: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:37:45.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9420" for this suite. 
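Note: the garbage collector test above deletes the RC through the API without orphaning, so the controller's pods are removed automatically. With a v1.17-era kubectl the same choice is the --cascade flag; the RC name is illustrative:
  kubectl delete rc my-rc                  # dependents are garbage collected, as above
  kubectl delete rc my-rc --cascade=false  # orphan the pods instead (propagationPolicy=Orphan)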
• [SLOW TEST:10.144 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":123,"skipped":1981,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:37:45.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 18 21:37:55.318: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9081 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:37:55.318: INFO: >>> kubeConfig: /root/.kube/config I0318 21:37:55.354033 6 log.go:172] (0xc002193340) (0xc001fc6c80) Create stream I0318 21:37:55.354063 6 log.go:172] (0xc002193340) (0xc001fc6c80) Stream added, broadcasting: 1 I0318 21:37:55.355924 6 log.go:172] (0xc002193340) Reply frame received for 1 I0318 21:37:55.355961 6 log.go:172] (0xc002193340) (0xc001bb5400) Create stream I0318 21:37:55.355971 6 log.go:172] (0xc002193340) (0xc001bb5400) Stream added, broadcasting: 3 I0318 21:37:55.357013 6 log.go:172] (0xc002193340) Reply frame received for 3 I0318 21:37:55.357053 6 log.go:172] (0xc002193340) (0xc001bb54a0) Create stream I0318 21:37:55.357071 6 log.go:172] (0xc002193340) (0xc001bb54a0) Stream added, broadcasting: 5 I0318 21:37:55.358156 6 log.go:172] (0xc002193340) Reply frame received for 5 I0318 21:37:55.421066 6 log.go:172] (0xc002193340) Data frame received for 5 I0318 21:37:55.421103 6 log.go:172] (0xc002193340) Data frame received for 3 I0318 21:37:55.421252 6 log.go:172] (0xc001bb5400) (3) Data frame handling I0318 21:37:55.421270 6 log.go:172] (0xc001bb5400) (3) Data frame sent I0318 21:37:55.421288 6 log.go:172] (0xc002193340) Data frame received for 3 I0318 21:37:55.421305 6 log.go:172] (0xc001bb5400) (3) Data frame handling I0318 21:37:55.421327 6 log.go:172] (0xc001bb54a0) (5) Data frame handling I0318 21:37:55.424700 6 log.go:172] (0xc002193340) Data frame received for 1 I0318 21:37:55.424757 6 log.go:172] (0xc001fc6c80) (1) Data frame handling I0318 21:37:55.424790 6 log.go:172] (0xc001fc6c80) (1) Data frame sent I0318 21:37:55.424893 6 log.go:172] (0xc002193340) (0xc001fc6c80) Stream removed, broadcasting: 1 I0318 21:37:55.425055 6 log.go:172] (0xc002193340) (0xc001fc6c80) Stream 
removed, broadcasting: 1 I0318 21:37:55.425229 6 log.go:172] (0xc002193340) (0xc001bb5400) Stream removed, broadcasting: 3 I0318 21:37:55.425546 6 log.go:172] (0xc002193340) Go away received I0318 21:37:55.425623 6 log.go:172] (0xc002193340) (0xc001bb54a0) Stream removed, broadcasting: 5 Mar 18 21:37:55.425: INFO: Exec stderr: "" Mar 18 21:37:55.425: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9081 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:37:55.425: INFO: >>> kubeConfig: /root/.kube/config I0318 21:37:55.450546 6 log.go:172] (0xc000bbc630) (0xc001bb5900) Create stream I0318 21:37:55.450583 6 log.go:172] (0xc000bbc630) (0xc001bb5900) Stream added, broadcasting: 1 I0318 21:37:55.452420 6 log.go:172] (0xc000bbc630) Reply frame received for 1 I0318 21:37:55.452457 6 log.go:172] (0xc000bbc630) (0xc000e43ae0) Create stream I0318 21:37:55.452470 6 log.go:172] (0xc000bbc630) (0xc000e43ae0) Stream added, broadcasting: 3 I0318 21:37:55.453650 6 log.go:172] (0xc000bbc630) Reply frame received for 3 I0318 21:37:55.453692 6 log.go:172] (0xc000bbc630) (0xc0019eeb40) Create stream I0318 21:37:55.453703 6 log.go:172] (0xc000bbc630) (0xc0019eeb40) Stream added, broadcasting: 5 I0318 21:37:55.454671 6 log.go:172] (0xc000bbc630) Reply frame received for 5 I0318 21:37:55.525569 6 log.go:172] (0xc000bbc630) Data frame received for 5 I0318 21:37:55.525753 6 log.go:172] (0xc0019eeb40) (5) Data frame handling I0318 21:37:55.525805 6 log.go:172] (0xc000bbc630) Data frame received for 3 I0318 21:37:55.525830 6 log.go:172] (0xc000e43ae0) (3) Data frame handling I0318 21:37:55.525855 6 log.go:172] (0xc000e43ae0) (3) Data frame sent I0318 21:37:55.525931 6 log.go:172] (0xc000bbc630) Data frame received for 3 I0318 21:37:55.525973 6 log.go:172] (0xc000e43ae0) (3) Data frame handling I0318 21:37:55.528343 6 log.go:172] (0xc000bbc630) Data frame received for 1 I0318 21:37:55.528378 6 log.go:172] (0xc001bb5900) (1) Data frame handling I0318 21:37:55.528390 6 log.go:172] (0xc001bb5900) (1) Data frame sent I0318 21:37:55.528402 6 log.go:172] (0xc000bbc630) (0xc001bb5900) Stream removed, broadcasting: 1 I0318 21:37:55.528412 6 log.go:172] (0xc000bbc630) Go away received I0318 21:37:55.528525 6 log.go:172] (0xc000bbc630) (0xc001bb5900) Stream removed, broadcasting: 1 I0318 21:37:55.528545 6 log.go:172] (0xc000bbc630) (0xc000e43ae0) Stream removed, broadcasting: 3 I0318 21:37:55.528554 6 log.go:172] (0xc000bbc630) (0xc0019eeb40) Stream removed, broadcasting: 5 Mar 18 21:37:55.528: INFO: Exec stderr: "" Mar 18 21:37:55.528: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9081 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:37:55.528: INFO: >>> kubeConfig: /root/.kube/config I0318 21:37:55.559491 6 log.go:172] (0xc0007c68f0) (0xc0019ef0e0) Create stream I0318 21:37:55.559530 6 log.go:172] (0xc0007c68f0) (0xc0019ef0e0) Stream added, broadcasting: 1 I0318 21:37:55.561489 6 log.go:172] (0xc0007c68f0) Reply frame received for 1 I0318 21:37:55.561527 6 log.go:172] (0xc0007c68f0) (0xc000e43ea0) Create stream I0318 21:37:55.561541 6 log.go:172] (0xc0007c68f0) (0xc000e43ea0) Stream added, broadcasting: 3 I0318 21:37:55.562638 6 log.go:172] (0xc0007c68f0) Reply frame received for 3 I0318 21:37:55.562684 6 log.go:172] (0xc0007c68f0) (0xc001008140) Create stream I0318 21:37:55.562699 6 log.go:172] (0xc0007c68f0) 
(0xc001008140) Stream added, broadcasting: 5 I0318 21:37:55.563780 6 log.go:172] (0xc0007c68f0) Reply frame received for 5 I0318 21:37:55.624052 6 log.go:172] (0xc0007c68f0) Data frame received for 5 I0318 21:37:55.624103 6 log.go:172] (0xc0007c68f0) Data frame received for 3 I0318 21:37:55.624142 6 log.go:172] (0xc000e43ea0) (3) Data frame handling I0318 21:37:55.624156 6 log.go:172] (0xc000e43ea0) (3) Data frame sent I0318 21:37:55.624167 6 log.go:172] (0xc0007c68f0) Data frame received for 3 I0318 21:37:55.624175 6 log.go:172] (0xc000e43ea0) (3) Data frame handling I0318 21:37:55.624199 6 log.go:172] (0xc001008140) (5) Data frame handling I0318 21:37:55.625721 6 log.go:172] (0xc0007c68f0) Data frame received for 1 I0318 21:37:55.625747 6 log.go:172] (0xc0019ef0e0) (1) Data frame handling I0318 21:37:55.625771 6 log.go:172] (0xc0019ef0e0) (1) Data frame sent I0318 21:37:55.625824 6 log.go:172] (0xc0007c68f0) (0xc0019ef0e0) Stream removed, broadcasting: 1 I0318 21:37:55.625851 6 log.go:172] (0xc0007c68f0) Go away received I0318 21:37:55.625948 6 log.go:172] (0xc0007c68f0) (0xc0019ef0e0) Stream removed, broadcasting: 1 I0318 21:37:55.625977 6 log.go:172] (0xc0007c68f0) (0xc000e43ea0) Stream removed, broadcasting: 3 I0318 21:37:55.625995 6 log.go:172] (0xc0007c68f0) (0xc001008140) Stream removed, broadcasting: 5 Mar 18 21:37:55.626: INFO: Exec stderr: "" Mar 18 21:37:55.626: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9081 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:37:55.626: INFO: >>> kubeConfig: /root/.kube/config I0318 21:37:55.662814 6 log.go:172] (0xc001fd80b0) (0xc0010083c0) Create stream I0318 21:37:55.662903 6 log.go:172] (0xc001fd80b0) (0xc0010083c0) Stream added, broadcasting: 1 I0318 21:37:55.665416 6 log.go:172] (0xc001fd80b0) Reply frame received for 1 I0318 21:37:55.665484 6 log.go:172] (0xc001fd80b0) (0xc000e43f40) Create stream I0318 21:37:55.665500 6 log.go:172] (0xc001fd80b0) (0xc000e43f40) Stream added, broadcasting: 3 I0318 21:37:55.666467 6 log.go:172] (0xc001fd80b0) Reply frame received for 3 I0318 21:37:55.666525 6 log.go:172] (0xc001fd80b0) (0xc0019ef2c0) Create stream I0318 21:37:55.666571 6 log.go:172] (0xc001fd80b0) (0xc0019ef2c0) Stream added, broadcasting: 5 I0318 21:37:55.667553 6 log.go:172] (0xc001fd80b0) Reply frame received for 5 I0318 21:37:55.736751 6 log.go:172] (0xc001fd80b0) Data frame received for 3 I0318 21:37:55.736821 6 log.go:172] (0xc000e43f40) (3) Data frame handling I0318 21:37:55.736834 6 log.go:172] (0xc000e43f40) (3) Data frame sent I0318 21:37:55.736843 6 log.go:172] (0xc001fd80b0) Data frame received for 3 I0318 21:37:55.736847 6 log.go:172] (0xc000e43f40) (3) Data frame handling I0318 21:37:55.736878 6 log.go:172] (0xc001fd80b0) Data frame received for 5 I0318 21:37:55.736915 6 log.go:172] (0xc0019ef2c0) (5) Data frame handling I0318 21:37:55.738201 6 log.go:172] (0xc001fd80b0) Data frame received for 1 I0318 21:37:55.738230 6 log.go:172] (0xc0010083c0) (1) Data frame handling I0318 21:37:55.738244 6 log.go:172] (0xc0010083c0) (1) Data frame sent I0318 21:37:55.738271 6 log.go:172] (0xc001fd80b0) (0xc0010083c0) Stream removed, broadcasting: 1 I0318 21:37:55.738473 6 log.go:172] (0xc001fd80b0) Go away received I0318 21:37:55.738596 6 log.go:172] (0xc001fd80b0) (0xc0010083c0) Stream removed, broadcasting: 1 I0318 21:37:55.738623 6 log.go:172] (0xc001fd80b0) (0xc000e43f40) Stream removed, broadcasting: 3 I0318 
21:37:55.738646 6 log.go:172] (0xc001fd80b0) (0xc0019ef2c0) Stream removed, broadcasting: 5 Mar 18 21:37:55.738: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 18 21:37:55.738: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9081 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:37:55.738: INFO: >>> kubeConfig: /root/.kube/config I0318 21:37:55.769907 6 log.go:172] (0xc0007c6f20) (0xc0019ef540) Create stream I0318 21:37:55.769952 6 log.go:172] (0xc0007c6f20) (0xc0019ef540) Stream added, broadcasting: 1 I0318 21:37:55.772747 6 log.go:172] (0xc0007c6f20) Reply frame received for 1 I0318 21:37:55.772786 6 log.go:172] (0xc0007c6f20) (0xc0019ef680) Create stream I0318 21:37:55.772802 6 log.go:172] (0xc0007c6f20) (0xc0019ef680) Stream added, broadcasting: 3 I0318 21:37:55.774033 6 log.go:172] (0xc0007c6f20) Reply frame received for 3 I0318 21:37:55.774090 6 log.go:172] (0xc0007c6f20) (0xc0019ef860) Create stream I0318 21:37:55.774106 6 log.go:172] (0xc0007c6f20) (0xc0019ef860) Stream added, broadcasting: 5 I0318 21:37:55.775110 6 log.go:172] (0xc0007c6f20) Reply frame received for 5 I0318 21:37:55.839211 6 log.go:172] (0xc0007c6f20) Data frame received for 5 I0318 21:37:55.839283 6 log.go:172] (0xc0019ef860) (5) Data frame handling I0318 21:37:55.839329 6 log.go:172] (0xc0007c6f20) Data frame received for 3 I0318 21:37:55.839370 6 log.go:172] (0xc0019ef680) (3) Data frame handling I0318 21:37:55.839413 6 log.go:172] (0xc0019ef680) (3) Data frame sent I0318 21:37:55.839809 6 log.go:172] (0xc0007c6f20) Data frame received for 3 I0318 21:37:55.839846 6 log.go:172] (0xc0019ef680) (3) Data frame handling I0318 21:37:55.841100 6 log.go:172] (0xc0007c6f20) Data frame received for 1 I0318 21:37:55.841299 6 log.go:172] (0xc0019ef540) (1) Data frame handling I0318 21:37:55.841334 6 log.go:172] (0xc0019ef540) (1) Data frame sent I0318 21:37:55.841365 6 log.go:172] (0xc0007c6f20) (0xc0019ef540) Stream removed, broadcasting: 1 I0318 21:37:55.841569 6 log.go:172] (0xc0007c6f20) (0xc0019ef540) Stream removed, broadcasting: 1 I0318 21:37:55.841596 6 log.go:172] (0xc0007c6f20) (0xc0019ef680) Stream removed, broadcasting: 3 I0318 21:37:55.841700 6 log.go:172] (0xc0007c6f20) Go away received I0318 21:37:55.841821 6 log.go:172] (0xc0007c6f20) (0xc0019ef860) Stream removed, broadcasting: 5 Mar 18 21:37:55.841: INFO: Exec stderr: "" Mar 18 21:37:55.841: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9081 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:37:55.841: INFO: >>> kubeConfig: /root/.kube/config I0318 21:37:55.868209 6 log.go:172] (0xc0007c7550) (0xc0019efa40) Create stream I0318 21:37:55.868229 6 log.go:172] (0xc0007c7550) (0xc0019efa40) Stream added, broadcasting: 1 I0318 21:37:55.871201 6 log.go:172] (0xc0007c7550) Reply frame received for 1 I0318 21:37:55.871254 6 log.go:172] (0xc0007c7550) (0xc0019efc20) Create stream I0318 21:37:55.871280 6 log.go:172] (0xc0007c7550) (0xc0019efc20) Stream added, broadcasting: 3 I0318 21:37:55.872314 6 log.go:172] (0xc0007c7550) Reply frame received for 3 I0318 21:37:55.872350 6 log.go:172] (0xc0007c7550) (0xc0028b2000) Create stream I0318 21:37:55.872362 6 log.go:172] (0xc0007c7550) (0xc0028b2000) Stream added, broadcasting: 5 I0318 21:37:55.873432 6 log.go:172] 
(0xc0007c7550) Reply frame received for 5 I0318 21:37:55.952923 6 log.go:172] (0xc0007c7550) Data frame received for 5 I0318 21:37:55.952973 6 log.go:172] (0xc0028b2000) (5) Data frame handling I0318 21:37:55.953094 6 log.go:172] (0xc0007c7550) Data frame received for 3 I0318 21:37:55.953108 6 log.go:172] (0xc0019efc20) (3) Data frame handling I0318 21:37:55.953211 6 log.go:172] (0xc0019efc20) (3) Data frame sent I0318 21:37:55.953223 6 log.go:172] (0xc0007c7550) Data frame received for 3 I0318 21:37:55.953270 6 log.go:172] (0xc0019efc20) (3) Data frame handling I0318 21:37:55.954984 6 log.go:172] (0xc0007c7550) Data frame received for 1 I0318 21:37:55.954998 6 log.go:172] (0xc0019efa40) (1) Data frame handling I0318 21:37:55.955011 6 log.go:172] (0xc0019efa40) (1) Data frame sent I0318 21:37:55.955090 6 log.go:172] (0xc0007c7550) (0xc0019efa40) Stream removed, broadcasting: 1 I0318 21:37:55.955219 6 log.go:172] (0xc0007c7550) Go away received I0318 21:37:55.955244 6 log.go:172] (0xc0007c7550) (0xc0019efa40) Stream removed, broadcasting: 1 I0318 21:37:55.955341 6 log.go:172] (0xc0007c7550) (0xc0019efc20) Stream removed, broadcasting: 3 I0318 21:37:55.955410 6 log.go:172] (0xc0007c7550) (0xc0028b2000) Stream removed, broadcasting: 5 Mar 18 21:37:55.955: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 18 21:37:55.955: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9081 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:37:55.955: INFO: >>> kubeConfig: /root/.kube/config I0318 21:37:55.987575 6 log.go:172] (0xc0007c7ad0) (0xc0019efd60) Create stream I0318 21:37:55.987596 6 log.go:172] (0xc0007c7ad0) (0xc0019efd60) Stream added, broadcasting: 1 I0318 21:37:55.990172 6 log.go:172] (0xc0007c7ad0) Reply frame received for 1 I0318 21:37:55.990228 6 log.go:172] (0xc0007c7ad0) (0xc0028b2140) Create stream I0318 21:37:55.990246 6 log.go:172] (0xc0007c7ad0) (0xc0028b2140) Stream added, broadcasting: 3 I0318 21:37:55.991386 6 log.go:172] (0xc0007c7ad0) Reply frame received for 3 I0318 21:37:55.991461 6 log.go:172] (0xc0007c7ad0) (0xc001fc6d20) Create stream I0318 21:37:55.991497 6 log.go:172] (0xc0007c7ad0) (0xc001fc6d20) Stream added, broadcasting: 5 I0318 21:37:55.992833 6 log.go:172] (0xc0007c7ad0) Reply frame received for 5 I0318 21:37:56.057781 6 log.go:172] (0xc0007c7ad0) Data frame received for 5 I0318 21:37:56.057831 6 log.go:172] (0xc001fc6d20) (5) Data frame handling I0318 21:37:56.057868 6 log.go:172] (0xc0007c7ad0) Data frame received for 3 I0318 21:37:56.057889 6 log.go:172] (0xc0028b2140) (3) Data frame handling I0318 21:37:56.057928 6 log.go:172] (0xc0028b2140) (3) Data frame sent I0318 21:37:56.058019 6 log.go:172] (0xc0007c7ad0) Data frame received for 3 I0318 21:37:56.058047 6 log.go:172] (0xc0028b2140) (3) Data frame handling I0318 21:37:56.059735 6 log.go:172] (0xc0007c7ad0) Data frame received for 1 I0318 21:37:56.059757 6 log.go:172] (0xc0019efd60) (1) Data frame handling I0318 21:37:56.059771 6 log.go:172] (0xc0019efd60) (1) Data frame sent I0318 21:37:56.059807 6 log.go:172] (0xc0007c7ad0) (0xc0019efd60) Stream removed, broadcasting: 1 I0318 21:37:56.059834 6 log.go:172] (0xc0007c7ad0) Go away received I0318 21:37:56.059920 6 log.go:172] (0xc0007c7ad0) (0xc0019efd60) Stream removed, broadcasting: 1 I0318 21:37:56.060048 6 log.go:172] (0xc0007c7ad0) (0xc0028b2140) Stream removed, 
broadcasting: 3 I0318 21:37:56.060054 6 log.go:172] (0xc0007c7ad0) (0xc001fc6d20) Stream removed, broadcasting: 5 Mar 18 21:37:56.060: INFO: Exec stderr: "" Mar 18 21:37:56.060: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9081 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:37:56.060: INFO: >>> kubeConfig: /root/.kube/config I0318 21:37:56.101204 6 log.go:172] (0xc002193a20) (0xc001fc7040) Create stream I0318 21:37:56.101245 6 log.go:172] (0xc002193a20) (0xc001fc7040) Stream added, broadcasting: 1 I0318 21:37:56.104229 6 log.go:172] (0xc002193a20) Reply frame received for 1 I0318 21:37:56.104281 6 log.go:172] (0xc002193a20) (0xc001fc7180) Create stream I0318 21:37:56.104296 6 log.go:172] (0xc002193a20) (0xc001fc7180) Stream added, broadcasting: 3 I0318 21:37:56.105603 6 log.go:172] (0xc002193a20) Reply frame received for 3 I0318 21:37:56.105636 6 log.go:172] (0xc002193a20) (0xc001008640) Create stream I0318 21:37:56.105646 6 log.go:172] (0xc002193a20) (0xc001008640) Stream added, broadcasting: 5 I0318 21:37:56.106474 6 log.go:172] (0xc002193a20) Reply frame received for 5 I0318 21:37:56.163480 6 log.go:172] (0xc002193a20) Data frame received for 3 I0318 21:37:56.163514 6 log.go:172] (0xc001fc7180) (3) Data frame handling I0318 21:37:56.163521 6 log.go:172] (0xc001fc7180) (3) Data frame sent I0318 21:37:56.163526 6 log.go:172] (0xc002193a20) Data frame received for 3 I0318 21:37:56.163530 6 log.go:172] (0xc001fc7180) (3) Data frame handling I0318 21:37:56.163540 6 log.go:172] (0xc002193a20) Data frame received for 5 I0318 21:37:56.163553 6 log.go:172] (0xc001008640) (5) Data frame handling I0318 21:37:56.164954 6 log.go:172] (0xc002193a20) Data frame received for 1 I0318 21:37:56.164969 6 log.go:172] (0xc001fc7040) (1) Data frame handling I0318 21:37:56.164979 6 log.go:172] (0xc001fc7040) (1) Data frame sent I0318 21:37:56.164992 6 log.go:172] (0xc002193a20) (0xc001fc7040) Stream removed, broadcasting: 1 I0318 21:37:56.165048 6 log.go:172] (0xc002193a20) (0xc001fc7040) Stream removed, broadcasting: 1 I0318 21:37:56.165060 6 log.go:172] (0xc002193a20) (0xc001fc7180) Stream removed, broadcasting: 3 I0318 21:37:56.165065 6 log.go:172] (0xc002193a20) (0xc001008640) Stream removed, broadcasting: 5 Mar 18 21:37:56.165: INFO: Exec stderr: "" Mar 18 21:37:56.165: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9081 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:37:56.165: INFO: >>> kubeConfig: /root/.kube/config I0318 21:37:56.165260 6 log.go:172] (0xc002193a20) Go away received I0318 21:37:56.197710 6 log.go:172] (0xc002a14000) (0xc001fc74a0) Create stream I0318 21:37:56.197766 6 log.go:172] (0xc002a14000) (0xc001fc74a0) Stream added, broadcasting: 1 I0318 21:37:56.201682 6 log.go:172] (0xc002a14000) Reply frame received for 1 I0318 21:37:56.201737 6 log.go:172] (0xc002a14000) (0xc001fc7540) Create stream I0318 21:37:56.201759 6 log.go:172] (0xc002a14000) (0xc001fc7540) Stream added, broadcasting: 3 I0318 21:37:56.202670 6 log.go:172] (0xc002a14000) Reply frame received for 3 I0318 21:37:56.202775 6 log.go:172] (0xc002a14000) (0xc002358140) Create stream I0318 21:37:56.202823 6 log.go:172] (0xc002a14000) (0xc002358140) Stream added, broadcasting: 5 I0318 21:37:56.204004 6 log.go:172] (0xc002a14000) Reply frame received for 5 I0318 21:37:56.267259 6 
log.go:172] (0xc002a14000) Data frame received for 5 I0318 21:37:56.267298 6 log.go:172] (0xc002358140) (5) Data frame handling I0318 21:37:56.267323 6 log.go:172] (0xc002a14000) Data frame received for 3 I0318 21:37:56.267336 6 log.go:172] (0xc001fc7540) (3) Data frame handling I0318 21:37:56.267347 6 log.go:172] (0xc001fc7540) (3) Data frame sent I0318 21:37:56.267360 6 log.go:172] (0xc002a14000) Data frame received for 3 I0318 21:37:56.267372 6 log.go:172] (0xc001fc7540) (3) Data frame handling I0318 21:37:56.268844 6 log.go:172] (0xc002a14000) Data frame received for 1 I0318 21:37:56.268876 6 log.go:172] (0xc001fc74a0) (1) Data frame handling I0318 21:37:56.268910 6 log.go:172] (0xc001fc74a0) (1) Data frame sent I0318 21:37:56.268938 6 log.go:172] (0xc002a14000) (0xc001fc74a0) Stream removed, broadcasting: 1 I0318 21:37:56.268967 6 log.go:172] (0xc002a14000) Go away received I0318 21:37:56.269101 6 log.go:172] (0xc002a14000) (0xc001fc74a0) Stream removed, broadcasting: 1 I0318 21:37:56.269290 6 log.go:172] (0xc002a14000) (0xc001fc7540) Stream removed, broadcasting: 3 I0318 21:37:56.269321 6 log.go:172] (0xc002a14000) (0xc002358140) Stream removed, broadcasting: 5 Mar 18 21:37:56.269: INFO: Exec stderr: "" Mar 18 21:37:56.269: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9081 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:37:56.269: INFO: >>> kubeConfig: /root/.kube/config I0318 21:37:56.305679 6 log.go:172] (0xc004abc790) (0xc0028b2640) Create stream I0318 21:37:56.305707 6 log.go:172] (0xc004abc790) (0xc0028b2640) Stream added, broadcasting: 1 I0318 21:37:56.307988 6 log.go:172] (0xc004abc790) Reply frame received for 1 I0318 21:37:56.308042 6 log.go:172] (0xc004abc790) (0xc0010086e0) Create stream I0318 21:37:56.308058 6 log.go:172] (0xc004abc790) (0xc0010086e0) Stream added, broadcasting: 3 I0318 21:37:56.309406 6 log.go:172] (0xc004abc790) Reply frame received for 3 I0318 21:37:56.309448 6 log.go:172] (0xc004abc790) (0xc002358280) Create stream I0318 21:37:56.309463 6 log.go:172] (0xc004abc790) (0xc002358280) Stream added, broadcasting: 5 I0318 21:37:56.310366 6 log.go:172] (0xc004abc790) Reply frame received for 5 I0318 21:37:56.380967 6 log.go:172] (0xc004abc790) Data frame received for 5 I0318 21:37:56.381108 6 log.go:172] (0xc002358280) (5) Data frame handling I0318 21:37:56.381322 6 log.go:172] (0xc004abc790) Data frame received for 3 I0318 21:37:56.381345 6 log.go:172] (0xc0010086e0) (3) Data frame handling I0318 21:37:56.381369 6 log.go:172] (0xc0010086e0) (3) Data frame sent I0318 21:37:56.381390 6 log.go:172] (0xc004abc790) Data frame received for 3 I0318 21:37:56.381409 6 log.go:172] (0xc0010086e0) (3) Data frame handling I0318 21:37:56.383378 6 log.go:172] (0xc004abc790) Data frame received for 1 I0318 21:37:56.383403 6 log.go:172] (0xc0028b2640) (1) Data frame handling I0318 21:37:56.383418 6 log.go:172] (0xc0028b2640) (1) Data frame sent I0318 21:37:56.384130 6 log.go:172] (0xc004abc790) (0xc0028b2640) Stream removed, broadcasting: 1 I0318 21:37:56.384256 6 log.go:172] (0xc004abc790) (0xc0028b2640) Stream removed, broadcasting: 1 I0318 21:37:56.384292 6 log.go:172] (0xc004abc790) (0xc0010086e0) Stream removed, broadcasting: 3 I0318 21:37:56.384327 6 log.go:172] (0xc004abc790) (0xc002358280) Stream removed, broadcasting: 5 Mar 18 21:37:56.384: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:37:56.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0318 21:37:56.385220 6 log.go:172] (0xc004abc790) Go away received STEP: Destroying namespace "e2e-kubelet-etc-hosts-9081" for this suite. • [SLOW TEST:11.223 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1993,"failed":0} S ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:37:56.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-8903, will wait for the garbage collector to delete the pods Mar 18 21:38:00.538: INFO: Deleting Job.batch foo took: 5.667376ms Mar 18 21:38:00.838: INFO: Terminating Job.batch foo pods took: 300.212892ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:38:34.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8903" for this suite. 
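Outside the framework, the delete-and-wait-for-GC pattern logged above is two commands (names mirror the test's; the timeout is arbitrary):

kubectl delete job foo -n job-8903
# Job pods carry the job-name label, so the garbage collector's progress is observable:
kubectl wait --for=delete pod -l job-name=foo -n job-8903 --timeout=120s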
• [SLOW TEST:37.764 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":125,"skipped":1994,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:38:34.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 18 21:38:34.219: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 18 21:38:34.245: INFO: Waiting for terminating namespaces to be deleted... Mar 18 21:38:34.248: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 18 21:38:34.273: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 18 21:38:34.273: INFO: Container kindnet-cni ready: true, restart count 0 Mar 18 21:38:34.273: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 18 21:38:34.273: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 21:38:34.273: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 18 21:38:34.289: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 18 21:38:34.289: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 21:38:34.289: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 18 21:38:34.289: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-19a5ca10-9539-4832-9280-7502a4910d99 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-19a5ca10-9539-4832-9280-7502a4910d99 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-19a5ca10-9539-4832-9280-7502a4910d99 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:43:42.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7084" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.297 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":126,"skipped":2001,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:43:42.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 18 21:43:47.594: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:43:48.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-782" for this suite. 
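Adoption and release in this test hinge entirely on label selectors and ownerReferences. A compressed sketch with illustrative names: a bare pod whose label matches a later ReplicaSet is adopted rather than duplicated, and relabeling it releases it again.

kubectl run pod-adoption-release --image=nginx --restart=Never --labels=name=pod-adoption-release
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
# The ReplicaSet adopts the existing pod (it gains an ownerReference).
# Changing the label releases the pod; the ReplicaSet creates a replacement:
kubectl label pod pod-adoption-release name=not-matching --overwrite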
• [SLOW TEST:6.160 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":127,"skipped":2002,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:43:48.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-f4c61dad-1e0c-4e84-90bf-2e66225b7d58 STEP: Creating a pod to test consume secrets Mar 18 21:43:48.792: INFO: Waiting up to 5m0s for pod "pod-secrets-eaf5e699-7da7-4b01-a873-df8ac70f3dbf" in namespace "secrets-7479" to be "success or failure" Mar 18 21:43:48.931: INFO: Pod "pod-secrets-eaf5e699-7da7-4b01-a873-df8ac70f3dbf": Phase="Pending", Reason="", readiness=false. Elapsed: 139.242531ms Mar 18 21:43:50.934: INFO: Pod "pod-secrets-eaf5e699-7da7-4b01-a873-df8ac70f3dbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142041587s Mar 18 21:43:52.938: INFO: Pod "pod-secrets-eaf5e699-7da7-4b01-a873-df8ac70f3dbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.146388943s STEP: Saw pod success Mar 18 21:43:52.938: INFO: Pod "pod-secrets-eaf5e699-7da7-4b01-a873-df8ac70f3dbf" satisfied condition "success or failure" Mar 18 21:43:52.941: INFO: Trying to get logs from node jerma-worker pod pod-secrets-eaf5e699-7da7-4b01-a873-df8ac70f3dbf container secret-volume-test: STEP: delete the pod Mar 18 21:43:52.982: INFO: Waiting for pod pod-secrets-eaf5e699-7da7-4b01-a873-df8ac70f3dbf to disappear Mar 18 21:43:52.993: INFO: Pod pod-secrets-eaf5e699-7da7-4b01-a873-df8ac70f3dbf no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:43:52.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7479" for this suite. 
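The pod just exercised consumes a secret volume with a non-default file mode. A minimal manifest of the same shape (names, key, and mode are illustrative; 0400 is YAML octal for owner-read-only):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0400
EOF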
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2025,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:43:53.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-6ea981eb-dc51-4116-9246-0633418e78a8 STEP: Creating a pod to test consume configMaps Mar 18 21:43:53.104: INFO: Waiting up to 5m0s for pod "pod-configmaps-94ed4d8c-fe4c-403c-98fd-3551e378c361" in namespace "configmap-6958" to be "success or failure" Mar 18 21:43:53.153: INFO: Pod "pod-configmaps-94ed4d8c-fe4c-403c-98fd-3551e378c361": Phase="Pending", Reason="", readiness=false. Elapsed: 49.349097ms Mar 18 21:43:55.219: INFO: Pod "pod-configmaps-94ed4d8c-fe4c-403c-98fd-3551e378c361": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115092541s Mar 18 21:43:57.222: INFO: Pod "pod-configmaps-94ed4d8c-fe4c-403c-98fd-3551e378c361": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118716686s STEP: Saw pod success Mar 18 21:43:57.222: INFO: Pod "pod-configmaps-94ed4d8c-fe4c-403c-98fd-3551e378c361" satisfied condition "success or failure" Mar 18 21:43:57.225: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-94ed4d8c-fe4c-403c-98fd-3551e378c361 container configmap-volume-test: STEP: delete the pod Mar 18 21:43:57.240: INFO: Waiting for pod pod-configmaps-94ed4d8c-fe4c-403c-98fd-3551e378c361 to disappear Mar 18 21:43:57.245: INFO: Pod pod-configmaps-94ed4d8c-fe4c-403c-98fd-3551e378c361 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:43:57.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6958" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2043,"failed":0} SSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:43:57.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:43:57.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-1704" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":130,"skipped":2048,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:43:57.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 18 21:43:57.543: INFO: Waiting up to 5m0s for pod "pod-a47ee4fe-d484-490d-8c94-3415cc282277" in namespace "emptydir-5470" to be "success or failure" Mar 18 21:43:57.546: INFO: Pod "pod-a47ee4fe-d484-490d-8c94-3415cc282277": Phase="Pending", Reason="", readiness=false. Elapsed: 3.339015ms Mar 18 21:43:59.552: INFO: Pod "pod-a47ee4fe-d484-490d-8c94-3415cc282277": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009299133s Mar 18 21:44:01.556: INFO: Pod "pod-a47ee4fe-d484-490d-8c94-3415cc282277": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013806975s STEP: Saw pod success Mar 18 21:44:01.556: INFO: Pod "pod-a47ee4fe-d484-490d-8c94-3415cc282277" satisfied condition "success or failure" Mar 18 21:44:01.560: INFO: Trying to get logs from node jerma-worker pod pod-a47ee4fe-d484-490d-8c94-3415cc282277 container test-container: STEP: delete the pod Mar 18 21:44:01.594: INFO: Waiting for pod pod-a47ee4fe-d484-490d-8c94-3415cc282277 to disappear Mar 18 21:44:01.606: INFO: Pod pod-a47ee4fe-d484-490d-8c94-3415cc282277 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:44:01.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5470" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2051,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:44:01.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-eeec25c8-3620-471d-ad4e-76c12eba525d STEP: Creating a pod to test consume configMaps Mar 18 21:44:01.720: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a68161a8-a9d3-493b-96a3-f22eb7db0716" in namespace "projected-5065" to be "success or failure" Mar 18 21:44:01.741: INFO: Pod "pod-projected-configmaps-a68161a8-a9d3-493b-96a3-f22eb7db0716": Phase="Pending", Reason="", readiness=false. Elapsed: 21.276507ms Mar 18 21:44:03.745: INFO: Pod "pod-projected-configmaps-a68161a8-a9d3-493b-96a3-f22eb7db0716": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025330968s Mar 18 21:44:05.750: INFO: Pod "pod-projected-configmaps-a68161a8-a9d3-493b-96a3-f22eb7db0716": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030037216s STEP: Saw pod success Mar 18 21:44:05.750: INFO: Pod "pod-projected-configmaps-a68161a8-a9d3-493b-96a3-f22eb7db0716" satisfied condition "success or failure" Mar 18 21:44:05.753: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-a68161a8-a9d3-493b-96a3-f22eb7db0716 container projected-configmap-volume-test: STEP: delete the pod Mar 18 21:44:05.802: INFO: Waiting for pod pod-projected-configmaps-a68161a8-a9d3-493b-96a3-f22eb7db0716 to disappear Mar 18 21:44:05.832: INFO: Pod pod-projected-configmaps-a68161a8-a9d3-493b-96a3-f22eb7db0716 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:44:05.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5065" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2068,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:44:05.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-5ca1ca34-21f2-48b3-a66f-79a088c4fdf4 STEP: Creating a pod to test consume configMaps Mar 18 21:44:05.920: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e79d3399-3f4f-4f91-934e-d8d5a55dffd3" in namespace "projected-680" to be "success or failure" Mar 18 21:44:05.923: INFO: Pod "pod-projected-configmaps-e79d3399-3f4f-4f91-934e-d8d5a55dffd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.873844ms Mar 18 21:44:07.926: INFO: Pod "pod-projected-configmaps-e79d3399-3f4f-4f91-934e-d8d5a55dffd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006498502s Mar 18 21:44:09.935: INFO: Pod "pod-projected-configmaps-e79d3399-3f4f-4f91-934e-d8d5a55dffd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01479571s STEP: Saw pod success Mar 18 21:44:09.935: INFO: Pod "pod-projected-configmaps-e79d3399-3f4f-4f91-934e-d8d5a55dffd3" satisfied condition "success or failure" Mar 18 21:44:09.938: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-e79d3399-3f4f-4f91-934e-d8d5a55dffd3 container projected-configmap-volume-test: STEP: delete the pod Mar 18 21:44:09.953: INFO: Waiting for pod pod-projected-configmaps-e79d3399-3f4f-4f91-934e-d8d5a55dffd3 to disappear Mar 18 21:44:09.958: INFO: Pod pod-projected-configmaps-e79d3399-3f4f-4f91-934e-d8d5a55dffd3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:44:09.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-680" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2076,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:44:09.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:44:16.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9148" for this suite. STEP: Destroying namespace "nsdeletetest-0" for this suite. Mar 18 21:44:16.210: INFO: Namespace nsdeletetest-0 was already deleted STEP: Destroying namespace "nsdeletetest-1520" for this suite. 
• [SLOW TEST:6.248 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":134,"skipped":2154,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:44:16.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 18 21:44:16.278: INFO: Waiting up to 5m0s for pod "pod-10809c05-aec8-44a5-9338-04f645793e13" in namespace "emptydir-3960" to be "success or failure" Mar 18 21:44:16.282: INFO: Pod "pod-10809c05-aec8-44a5-9338-04f645793e13": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154618ms Mar 18 21:44:18.287: INFO: Pod "pod-10809c05-aec8-44a5-9338-04f645793e13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008552648s Mar 18 21:44:20.291: INFO: Pod "pod-10809c05-aec8-44a5-9338-04f645793e13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012704664s STEP: Saw pod success Mar 18 21:44:20.291: INFO: Pod "pod-10809c05-aec8-44a5-9338-04f645793e13" satisfied condition "success or failure" Mar 18 21:44:20.294: INFO: Trying to get logs from node jerma-worker pod pod-10809c05-aec8-44a5-9338-04f645793e13 container test-container: STEP: delete the pod Mar 18 21:44:20.327: INFO: Waiting for pod pod-10809c05-aec8-44a5-9338-04f645793e13 to disappear Mar 18 21:44:20.342: INFO: Pod pod-10809c05-aec8-44a5-9338-04f645793e13 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:44:20.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3960" for this suite. 
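The "(non-root,0777,tmpfs)" variant changes only who writes: a pod-level runAsUser makes the container run unprivileged before the test checks a world-writable file on the memory-backed volume. Relative to the earlier emptyDir sketch, the difference is one securityContext stanza (1001 is an arbitrary non-root UID):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "id -u && touch /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF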
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2155,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:44:20.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7774 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 18 21:44:20.380: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 18 21:44:46.490: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.24 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7774 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:44:46.490: INFO: >>> kubeConfig: /root/.kube/config I0318 21:44:46.521458 6 log.go:172] (0xc001fd84d0) (0xc0012b8a00) Create stream I0318 21:44:46.521499 6 log.go:172] (0xc001fd84d0) (0xc0012b8a00) Stream added, broadcasting: 1 I0318 21:44:46.523544 6 log.go:172] (0xc001fd84d0) Reply frame received for 1 I0318 21:44:46.523575 6 log.go:172] (0xc001fd84d0) (0xc0028b3f40) Create stream I0318 21:44:46.523587 6 log.go:172] (0xc001fd84d0) (0xc0028b3f40) Stream added, broadcasting: 3 I0318 21:44:46.524475 6 log.go:172] (0xc001fd84d0) Reply frame received for 3 I0318 21:44:46.524500 6 log.go:172] (0xc001fd84d0) (0xc001424140) Create stream I0318 21:44:46.524510 6 log.go:172] (0xc001fd84d0) (0xc001424140) Stream added, broadcasting: 5 I0318 21:44:46.525506 6 log.go:172] (0xc001fd84d0) Reply frame received for 5 I0318 21:44:47.612935 6 log.go:172] (0xc001fd84d0) Data frame received for 3 I0318 21:44:47.612988 6 log.go:172] (0xc001fd84d0) Data frame received for 5 I0318 21:44:47.613375 6 log.go:172] (0xc001424140) (5) Data frame handling I0318 21:44:47.613464 6 log.go:172] (0xc0028b3f40) (3) Data frame handling I0318 21:44:47.613495 6 log.go:172] (0xc0028b3f40) (3) Data frame sent I0318 21:44:47.613521 6 log.go:172] (0xc001fd84d0) Data frame received for 3 I0318 21:44:47.613537 6 log.go:172] (0xc0028b3f40) (3) Data frame handling I0318 21:44:47.615402 6 log.go:172] (0xc001fd84d0) Data frame received for 1 I0318 21:44:47.615430 6 log.go:172] (0xc0012b8a00) (1) Data frame handling I0318 21:44:47.615450 6 log.go:172] (0xc0012b8a00) (1) Data frame sent I0318 21:44:47.615469 6 log.go:172] (0xc001fd84d0) (0xc0012b8a00) Stream removed, broadcasting: 1 I0318 21:44:47.615491 6 log.go:172] (0xc001fd84d0) Go away received I0318 21:44:47.615622 6 log.go:172] (0xc001fd84d0) (0xc0012b8a00) Stream removed, broadcasting: 1 I0318 21:44:47.615661 6 log.go:172] (0xc001fd84d0) (0xc0028b3f40) Stream removed, broadcasting: 3 I0318 21:44:47.615677 6 log.go:172] 
(0xc001fd84d0) (0xc001424140) Stream removed, broadcasting: 5 Mar 18 21:44:47.615: INFO: Found all expected endpoints: [netserver-0] Mar 18 21:44:47.619: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.50 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7774 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:44:47.619: INFO: >>> kubeConfig: /root/.kube/config I0318 21:44:47.655507 6 log.go:172] (0xc002193c30) (0xc0017d72c0) Create stream I0318 21:44:47.655539 6 log.go:172] (0xc002193c30) (0xc0017d72c0) Stream added, broadcasting: 1 I0318 21:44:47.658105 6 log.go:172] (0xc002193c30) Reply frame received for 1 I0318 21:44:47.658127 6 log.go:172] (0xc002193c30) (0xc001f32140) Create stream I0318 21:44:47.658134 6 log.go:172] (0xc002193c30) (0xc001f32140) Stream added, broadcasting: 3 I0318 21:44:47.659012 6 log.go:172] (0xc002193c30) Reply frame received for 3 I0318 21:44:47.659049 6 log.go:172] (0xc002193c30) (0xc0012b8be0) Create stream I0318 21:44:47.659064 6 log.go:172] (0xc002193c30) (0xc0012b8be0) Stream added, broadcasting: 5 I0318 21:44:47.660074 6 log.go:172] (0xc002193c30) Reply frame received for 5 I0318 21:44:48.724444 6 log.go:172] (0xc002193c30) Data frame received for 3 I0318 21:44:48.724495 6 log.go:172] (0xc001f32140) (3) Data frame handling I0318 21:44:48.724522 6 log.go:172] (0xc001f32140) (3) Data frame sent I0318 21:44:48.724536 6 log.go:172] (0xc002193c30) Data frame received for 3 I0318 21:44:48.724564 6 log.go:172] (0xc001f32140) (3) Data frame handling I0318 21:44:48.724838 6 log.go:172] (0xc002193c30) Data frame received for 5 I0318 21:44:48.724890 6 log.go:172] (0xc0012b8be0) (5) Data frame handling I0318 21:44:48.726998 6 log.go:172] (0xc002193c30) Data frame received for 1 I0318 21:44:48.727025 6 log.go:172] (0xc0017d72c0) (1) Data frame handling I0318 21:44:48.727038 6 log.go:172] (0xc0017d72c0) (1) Data frame sent I0318 21:44:48.727069 6 log.go:172] (0xc002193c30) (0xc0017d72c0) Stream removed, broadcasting: 1 I0318 21:44:48.727200 6 log.go:172] (0xc002193c30) Go away received I0318 21:44:48.727292 6 log.go:172] (0xc002193c30) (0xc0017d72c0) Stream removed, broadcasting: 1 I0318 21:44:48.727321 6 log.go:172] (0xc002193c30) (0xc001f32140) Stream removed, broadcasting: 3 I0318 21:44:48.727344 6 log.go:172] (0xc002193c30) (0xc0012b8be0) Stream removed, broadcasting: 5 Mar 18 21:44:48.727: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:44:48.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7774" for this suite. 
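The UDP probe the test drives through ExecWithOptions is visible verbatim in the Command field above; replayed by hand it looks like this (the pod IP 10.244.1.24 and port 8081 come from this particular cluster run):

kubectl exec -n pod-network-test-7774 host-test-container-pod -- \
  sh -c "echo hostName | nc -w 1 -u 10.244.1.24 8081 | grep -v '^\s*$'"
# The netserver pod answers on UDP 8081 with its hostname, so non-empty
# output confirms node-to-pod UDP reachability.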
• [SLOW TEST:28.386 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2164,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:44:48.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:44:59.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8502" for this suite. • [SLOW TEST:11.169 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":137,"skipped":2183,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:44:59.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:44:59.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 18 21:45:00.584: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-18T21:45:00Z generation:1 name:name1 resourceVersion:856680 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9dc96fa0-4c6f-4acf-9fd7-a6d077e02c2a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 18 21:45:10.590: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-18T21:45:10Z generation:1 name:name2 resourceVersion:856715 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2d196922-786e-4a4a-9a32-ed01603bcd33] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 18 21:45:20.596: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-18T21:45:00Z generation:2 name:name1 resourceVersion:856745 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9dc96fa0-4c6f-4acf-9fd7-a6d077e02c2a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 18 21:45:30.601: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-18T21:45:10Z generation:2 name:name2 resourceVersion:856775 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2d196922-786e-4a4a-9a32-ed01603bcd33] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 18 21:45:40.615: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-18T21:45:00Z generation:2 name:name1 resourceVersion:856805 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9dc96fa0-4c6f-4acf-9fd7-a6d077e02c2a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 18 21:45:50.623: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-18T21:45:10Z generation:2 name:name2 resourceVersion:856835 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2d196922-786e-4a4a-9a32-ed01603bcd33] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition 
Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:46:01.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-8239" for this suite. • [SLOW TEST:61.235 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":138,"skipped":2213,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:46:01.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 21:46:01.717: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 21:46:03.725: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164761, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164761, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164761, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164761, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 21:46:06.755: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: 
create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:46:16.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7042" for this suite. STEP: Destroying namespace "webhook-7042-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.881 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":139,"skipped":2219,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:46:17.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Mar 18 21:46:17.076: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-568" to be "success or failure" Mar 18 21:46:17.087: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.689238ms Mar 18 21:46:19.091: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014981335s Mar 18 21:46:21.149: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073308594s Mar 18 21:46:23.153: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.076973527s STEP: Saw pod success Mar 18 21:46:23.153: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 18 21:46:23.155: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 18 21:46:23.184: INFO: Waiting for pod pod-host-path-test to disappear Mar 18 21:46:23.188: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:46:23.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-568" for this suite. • [SLOW TEST:6.173 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2237,"failed":0} SS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:46:23.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-c2b3f215-43c6-45c9-8b3a-f3ece0b7e17f STEP: Creating secret with name s-test-opt-upd-b6a18701-8942-495c-9ba9-23837eb6ee93 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c2b3f215-43c6-45c9-8b3a-f3ece0b7e17f STEP: Updating secret s-test-opt-upd-b6a18701-8942-495c-9ba9-23837eb6ee93 STEP: Creating secret with name s-test-opt-create-a3e5a971-8186-4330-8e2d-3a15f7455814 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:46:33.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8023" for this suite. 
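The optional-secret behaviour exercised above hangs off the projected volume's secret source. A minimal sketch of an equivalent pod, with the secret name taken from this run and everything else assumed:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: s-test-opt-del-c2b3f215-43c6-45c9-8b3a-f3ece0b7e17f
          optional: true
EOF

With optional: true the pod keeps running after the secret is deleted, and the kubelet's periodic sync is what eventually surfaces creates and updates in the mounted files, hence the "waiting to observe update in volume" step.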
• [SLOW TEST:10.218 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:46:33.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Mar 18 21:46:34.030: INFO: created pod pod-service-account-defaultsa Mar 18 21:46:34.031: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 18 21:46:34.036: INFO: created pod pod-service-account-mountsa Mar 18 21:46:34.036: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 18 21:46:34.059: INFO: created pod pod-service-account-nomountsa Mar 18 21:46:34.059: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 18 21:46:34.095: INFO: created pod pod-service-account-defaultsa-mountspec Mar 18 21:46:34.095: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 18 21:46:34.125: INFO: created pod pod-service-account-mountsa-mountspec Mar 18 21:46:34.125: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 18 21:46:34.139: INFO: created pod pod-service-account-nomountsa-mountspec Mar 18 21:46:34.139: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 18 21:46:34.158: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 18 21:46:34.158: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 18 21:46:34.210: INFO: created pod pod-service-account-mountsa-nomountspec Mar 18 21:46:34.210: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 18 21:46:34.238: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 18 21:46:34.238: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:46:34.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8375" for this suite. 
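The mount/nomount matrix above follows the documented precedence rule: spec.automountServiceAccountToken on the pod, when set, overrides the same field on the ServiceAccount. A minimal sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: nomountsa-mountspec
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true  # pod-level setting wins, so the token is mounted
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF

This is the same combination the log reports as "pod-service-account-nomountsa-mountspec service account token volume mount: true".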
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":142,"skipped":2269,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:46:34.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 18 21:46:34.529: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:46:49.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1542" for this suite. • [SLOW TEST:15.490 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":143,"skipped":2278,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:46:49.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 18 21:46:49.922: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3945 
/api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-label-changed ac7b612f-f0c1-433a-bb94-ce4a74b4b243 857282 0 2020-03-18 21:46:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 18 21:46:49.922: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3945 /api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-label-changed ac7b612f-f0c1-433a-bb94-ce4a74b4b243 857283 0 2020-03-18 21:46:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 18 21:46:49.922: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3945 /api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-label-changed ac7b612f-f0c1-433a-bb94-ce4a74b4b243 857284 0 2020-03-18 21:46:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 18 21:46:59.953: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3945 /api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-label-changed ac7b612f-f0c1-433a-bb94-ce4a74b4b243 857321 0 2020-03-18 21:46:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 18 21:46:59.953: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3945 /api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-label-changed ac7b612f-f0c1-433a-bb94-ce4a74b4b243 857322 0 2020-03-18 21:46:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 18 21:46:59.954: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3945 /api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-label-changed ac7b612f-f0c1-433a-bb94-ce4a74b4b243 857323 0 2020-03-18 21:46:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:46:59.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3945" for this suite. 
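The sequence above is the point of the test: on a label-selector watch, an object whose labels stop matching is reported as DELETED, and as ADDED again once the label is restored, even though the configmap was only ever modified. The same stream can be observed with a plain kubectl watch, using the namespace and label from this run:

kubectl get configmaps -n watch-3945 \
  -l watch-this-configmap=label-changed-and-restored --watch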
• [SLOW TEST:10.119 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":144,"skipped":2316,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:46:59.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:47:16.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-914" for this suite. • [SLOW TEST:16.213 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":145,"skipped":2342,"failed":0} SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:47:16.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-5016/configmap-test-68e9c5d2-d17a-4dba-acf0-96911c2342e5 STEP: Creating a pod to test consume configMaps Mar 18 21:47:16.280: INFO: Waiting up to 5m0s for pod "pod-configmaps-658fbb5d-964d-4e52-97ce-af7e9fd86dfe" in namespace "configmap-5016" to be "success or failure" Mar 18 21:47:16.295: INFO: Pod "pod-configmaps-658fbb5d-964d-4e52-97ce-af7e9fd86dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 14.71565ms Mar 18 21:47:18.299: INFO: Pod "pod-configmaps-658fbb5d-964d-4e52-97ce-af7e9fd86dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018809007s Mar 18 21:47:20.303: INFO: Pod "pod-configmaps-658fbb5d-964d-4e52-97ce-af7e9fd86dfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022944239s STEP: Saw pod success Mar 18 21:47:20.303: INFO: Pod "pod-configmaps-658fbb5d-964d-4e52-97ce-af7e9fd86dfe" satisfied condition "success or failure" Mar 18 21:47:20.306: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-658fbb5d-964d-4e52-97ce-af7e9fd86dfe container env-test: STEP: delete the pod Mar 18 21:47:20.326: INFO: Waiting for pod pod-configmaps-658fbb5d-964d-4e52-97ce-af7e9fd86dfe to disappear Mar 18 21:47:20.330: INFO: Pod pod-configmaps-658fbb5d-964d-4e52-97ce-af7e9fd86dfe no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:47:20.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5016" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2348,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:47:20.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 18 21:47:23.520: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:47:23.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6799" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2351,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:47:23.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-0571a984-4485-4685-9d5c-f4df1b53a304 STEP: Creating a pod to test consume secrets Mar 18 21:47:23.713: INFO: Waiting up to 5m0s for pod "pod-secrets-4ac96a1d-10e9-40ec-b240-efc545579593" in namespace "secrets-2442" to be "success or failure" Mar 18 21:47:23.723: INFO: Pod "pod-secrets-4ac96a1d-10e9-40ec-b240-efc545579593": Phase="Pending", Reason="", readiness=false. Elapsed: 9.705536ms Mar 18 21:47:25.755: INFO: Pod "pod-secrets-4ac96a1d-10e9-40ec-b240-efc545579593": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.041301626s Mar 18 21:47:27.759: INFO: Pod "pod-secrets-4ac96a1d-10e9-40ec-b240-efc545579593": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045581238s STEP: Saw pod success Mar 18 21:47:27.759: INFO: Pod "pod-secrets-4ac96a1d-10e9-40ec-b240-efc545579593" satisfied condition "success or failure" Mar 18 21:47:27.762: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-4ac96a1d-10e9-40ec-b240-efc545579593 container secret-volume-test: STEP: delete the pod Mar 18 21:47:27.795: INFO: Waiting for pod pod-secrets-4ac96a1d-10e9-40ec-b240-efc545579593 to disappear Mar 18 21:47:27.807: INFO: Pod pod-secrets-4ac96a1d-10e9-40ec-b240-efc545579593 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:47:27.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2442" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2358,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:47:27.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Mar 18 21:47:27.860: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix490309914/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:47:27.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1722" for this suite. 
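The --unix-socket mode verified above serves the same proxied API over a local socket instead of a TCP port. The retrieval step corresponds to roughly the following; the socket path is arbitrary:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/

The host in the URL is ignored when curl is pointed at a unix socket.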
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":149,"skipped":2412,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:47:27.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 21:47:28.289: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 21:47:30.301: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164848, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164848, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164848, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164848, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 21:47:33.332: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 18 21:47:37.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-6210 to-be-attached-pod -i -c=container1' Mar 18 21:47:40.369: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:47:40.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6210" for this suite. STEP: Destroying namespace "webhook-6210-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.515 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":150,"skipped":2415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:47:40.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 18 21:47:48.563: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 21:47:48.566: INFO: Pod pod-with-poststart-exec-hook still exists Mar 18 21:47:50.566: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 21:47:50.570: INFO: Pod pod-with-poststart-exec-hook still exists Mar 18 21:47:52.566: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 21:47:52.570: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:47:52.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4333" for this suite. 
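A postStart exec hook of the kind exercised above hangs off the container's lifecycle field; a minimal sketch, with the handler command assumed:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart"]
EOF

Kubernetes does not mark the container Running until the postStart handler returns, which is what makes the "check poststart hook" step meaningful.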
• [SLOW TEST:12.127 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2450,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:47:52.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:47:52.618: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 18 21:47:54.660: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:47:55.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-486" for this suite. 
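The failure condition asserted above lands in the controller's status as a ReplicaFailure condition whose message cites the exceeded quota, and scaling back under the quota clears it. One way to inspect it directly, using the names from this run:

kubectl get rc condition-test -n replication-controller-486 \
  -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}'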
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":152,"skipped":2452,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:47:55.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 21:47:56.704: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 21:47:58.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164876, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164876, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164876, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164876, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 21:48:00.718: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164876, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164876, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164876, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720164876, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 21:48:03.773: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook 
configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:48:03.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8471" for this suite. STEP: Destroying namespace "webhook-8471-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.301 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":153,"skipped":2454,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:48:03.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 18 21:48:04.036: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 18 21:48:04.080: INFO: Waiting for terminating namespaces to be deleted... 
Mar 18 21:48:04.082: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 18 21:48:04.098: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 18 21:48:04.098: INFO: Container kindnet-cni ready: true, restart count 0 Mar 18 21:48:04.098: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 18 21:48:04.098: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 21:48:04.098: INFO: sample-webhook-deployment-5f65f8c764-x8vxs from webhook-8471 started at 2020-03-18 21:47:56 +0000 UTC (1 container status recorded) Mar 18 21:48:04.098: INFO: Container sample-webhook ready: true, restart count 0 Mar 18 21:48:04.098: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 18 21:48:04.103: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 18 21:48:04.103: INFO: Container kindnet-cni ready: true, restart count 0 Mar 18 21:48:04.103: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 18 21:48:04.103: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fd8446f0725911], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:48:05.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1854" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":154,"skipped":2513,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:48:05.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 18 21:48:05.197: INFO: Waiting up to 5m0s for pod "pod-b0e0ae5a-d591-4bfe-aaa9-097e1ae56d48" in namespace "emptydir-1358" to be "success or failure" Mar 18 21:48:05.210: INFO: Pod "pod-b0e0ae5a-d591-4bfe-aaa9-097e1ae56d48": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.723881ms Mar 18 21:48:07.214: INFO: Pod "pod-b0e0ae5a-d591-4bfe-aaa9-097e1ae56d48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016327819s Mar 18 21:48:09.217: INFO: Pod "pod-b0e0ae5a-d591-4bfe-aaa9-097e1ae56d48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020031234s STEP: Saw pod success Mar 18 21:48:09.217: INFO: Pod "pod-b0e0ae5a-d591-4bfe-aaa9-097e1ae56d48" satisfied condition "success or failure" Mar 18 21:48:09.220: INFO: Trying to get logs from node jerma-worker pod pod-b0e0ae5a-d591-4bfe-aaa9-097e1ae56d48 container test-container: STEP: delete the pod Mar 18 21:48:09.271: INFO: Waiting for pod pod-b0e0ae5a-d591-4bfe-aaa9-097e1ae56d48 to disappear Mar 18 21:48:09.284: INFO: Pod pod-b0e0ae5a-d591-4bfe-aaa9-097e1ae56d48 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:48:09.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1358" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2526,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:48:09.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:48:09.360: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-e6e8e91e-be4f-4277-994c-bbf7c0ef8832" in namespace "security-context-test-8028" to be "success or failure" Mar 18 21:48:09.363: INFO: Pod "busybox-privileged-false-e6e8e91e-be4f-4277-994c-bbf7c0ef8832": Phase="Pending", Reason="", readiness=false. Elapsed: 3.07802ms Mar 18 21:48:11.378: INFO: Pod "busybox-privileged-false-e6e8e91e-be4f-4277-994c-bbf7c0ef8832": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017928796s Mar 18 21:48:13.382: INFO: Pod "busybox-privileged-false-e6e8e91e-be4f-4277-994c-bbf7c0ef8832": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021967244s Mar 18 21:48:13.382: INFO: Pod "busybox-privileged-false-e6e8e91e-be4f-4277-994c-bbf7c0ef8832" satisfied condition "success or failure" Mar 18 21:48:13.389: INFO: Got logs for pod "busybox-privileged-false-e6e8e91e-be4f-4277-994c-bbf7c0ef8832": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:48:13.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8028" for this suite. 
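The logged "RTNETLINK answers: Operation not permitted" is the expected result: with privileged: false the container lacks the capabilities that network-device changes require. A minimal sketch of a spec of this shape; the exact probe command is an assumption, any ip link mutation fails the same way:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false
spec:
  restartPolicy: Never
  containers:
  - name: busybox-privileged-false
    image: busybox
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]  # probe command assumed
    securityContext:
      privileged: false
EOF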
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2535,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:48:13.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1692 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 18 21:48:13.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1165' Mar 18 21:48:13.543: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 18 21:48:13.543: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Mar 18 21:48:13.559: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 18 21:48:13.587: INFO: scanned /root for discovery docs: Mar 18 21:48:13.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1165' Mar 18 21:48:29.433: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 18 21:48:29.433: INFO: stdout: "Created e2e-test-httpd-rc-a75f07d4022ac0ba21eeed5aae5beac1\nScaling up e2e-test-httpd-rc-a75f07d4022ac0ba21eeed5aae5beac1 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-a75f07d4022ac0ba21eeed5aae5beac1 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-a75f07d4022ac0ba21eeed5aae5beac1 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Mar 18 21:48:29.433: INFO: stdout: "Created e2e-test-httpd-rc-a75f07d4022ac0ba21eeed5aae5beac1\nScaling up e2e-test-httpd-rc-a75f07d4022ac0ba21eeed5aae5beac1 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-a75f07d4022ac0ba21eeed5aae5beac1 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-a75f07d4022ac0ba21eeed5aae5beac1 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Mar 18 21:48:29.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-1165' Mar 18 21:48:29.528: INFO: stderr: "" Mar 18 21:48:29.528: INFO: stdout: "e2e-test-httpd-rc-a75f07d4022ac0ba21eeed5aae5beac1-ftn67 " Mar 18 21:48:29.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-a75f07d4022ac0ba21eeed5aae5beac1-ftn67 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1165' Mar 18 21:48:29.630: INFO: stderr: "" Mar 18 21:48:29.630: INFO: stdout: "true" Mar 18 21:48:29.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-a75f07d4022ac0ba21eeed5aae5beac1-ftn67 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1165' Mar 18 21:48:29.735: INFO: stderr: "" Mar 18 21:48:29.735: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Mar 18 21:48:29.735: INFO: e2e-test-httpd-rc-a75f07d4022ac0ba21eeed5aae5beac1-ftn67 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1698 Mar 18 21:48:29.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1165' Mar 18 21:48:29.827: INFO: stderr: "" Mar 18 21:48:29.827: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:48:29.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1165" for this suite. 
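The replication controller created above by "kubectl run --generator=run/v1" is roughly equivalent to the following manifest (a sketch; the "run" label key matches the selector the test queries afterwards):

# Equivalent to: kubectl run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-httpd-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-httpd-rc
  template:
    metadata:
      labels:
        run: e2e-test-httpd-rc
    spec:
      containers:
      - name: e2e-test-httpd-rc
        image: docker.io/library/httpd:2.4.38-alpine
# The deprecated rolling-update command then creates a second RC
# (e2e-test-httpd-rc-<hash>), scales it up while scaling the original down,
# and finally renames it back, exactly as the stdout above describes.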
• [SLOW TEST:16.435 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1687 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":157,"skipped":2541,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:48:29.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 21:48:29.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b678c15-393e-4eba-ad7c-a60a47023302" in namespace "downward-api-5620" to be "success or failure" Mar 18 21:48:29.932: INFO: Pod "downwardapi-volume-4b678c15-393e-4eba-ad7c-a60a47023302": Phase="Pending", Reason="", readiness=false. Elapsed: 8.149995ms Mar 18 21:48:31.944: INFO: Pod "downwardapi-volume-4b678c15-393e-4eba-ad7c-a60a47023302": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020431272s Mar 18 21:48:33.948: INFO: Pod "downwardapi-volume-4b678c15-393e-4eba-ad7c-a60a47023302": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024174532s STEP: Saw pod success Mar 18 21:48:33.948: INFO: Pod "downwardapi-volume-4b678c15-393e-4eba-ad7c-a60a47023302" satisfied condition "success or failure" Mar 18 21:48:33.952: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4b678c15-393e-4eba-ad7c-a60a47023302 container client-container: STEP: delete the pod Mar 18 21:48:33.969: INFO: Waiting for pod downwardapi-volume-4b678c15-393e-4eba-ad7c-a60a47023302 to disappear Mar 18 21:48:33.974: INFO: Pod downwardapi-volume-4b678c15-393e-4eba-ad7c-a60a47023302 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:48:33.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5620" for this suite. 
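A sketch of a pod that exposes its own memory limit through a downwardAPI volume, as the test above exercises; the pod name, image, limit value, and mount path are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example      # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # assumed; any image that can read a file works
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"                  # the value the mounted file should report
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory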
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2566,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:48:33.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1733 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 18 21:48:34.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3534' Mar 18 21:48:34.146: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 18 21:48:34.146: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1738 Mar 18 21:48:38.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3534' Mar 18 21:48:38.285: INFO: stderr: "" Mar 18 21:48:38.285: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:48:38.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3534" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":159,"skipped":2615,"failed":0} SSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:48:38.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:48:38.367: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:48:42.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9556" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2619,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:48:42.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:49:13.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4129" for this suite. STEP: Destroying namespace "nsdeletetest-8938" for this suite. Mar 18 21:49:13.678: INFO: Namespace nsdeletetest-8938 was already deleted STEP: Destroying namespace "nsdeletetest-2579" for this suite. 
• [SLOW TEST:31.254 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":161,"skipped":2647,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:49:13.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6872.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6872.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6872.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6872.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6872.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6872.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6872.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6872.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6872.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6872.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6872.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 11.7.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.7.11_udp@PTR;check="$$(dig +tcp +noall +answer +search 11.7.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.7.11_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6872.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6872.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6872.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6872.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6872.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6872.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6872.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6872.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6872.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6872.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6872.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 11.7.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.7.11_udp@PTR;check="$$(dig +tcp +noall +answer +search 11.7.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.7.11_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 18 21:49:19.878: INFO: Unable to read wheezy_udp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:19.881: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:19.884: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:19.887: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:19.908: INFO: Unable to read jessie_udp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:19.911: INFO: Unable to read jessie_tcp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:19.914: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:19.916: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:19.969: INFO: Lookups using dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f failed for: [wheezy_udp@dns-test-service.dns-6872.svc.cluster.local wheezy_tcp@dns-test-service.dns-6872.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local jessie_udp@dns-test-service.dns-6872.svc.cluster.local jessie_tcp@dns-test-service.dns-6872.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local] Mar 18 21:49:24.974: INFO: Unable to read wheezy_udp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:24.977: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods 
dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:24.981: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:24.984: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:25.008: INFO: Unable to read jessie_udp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:25.011: INFO: Unable to read jessie_tcp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:25.013: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:25.016: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:25.037: INFO: Lookups using dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f failed for: [wheezy_udp@dns-test-service.dns-6872.svc.cluster.local wheezy_tcp@dns-test-service.dns-6872.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local jessie_udp@dns-test-service.dns-6872.svc.cluster.local jessie_tcp@dns-test-service.dns-6872.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local] Mar 18 21:49:29.985: INFO: Unable to read wheezy_udp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:29.988: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:29.991: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:29.993: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:30.012: INFO: Unable to read jessie_udp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the 
server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:30.015: INFO: Unable to read jessie_tcp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:30.018: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:30.020: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:30.039: INFO: Lookups using dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f failed for: [wheezy_udp@dns-test-service.dns-6872.svc.cluster.local wheezy_tcp@dns-test-service.dns-6872.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local jessie_udp@dns-test-service.dns-6872.svc.cluster.local jessie_tcp@dns-test-service.dns-6872.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local] Mar 18 21:49:34.973: INFO: Unable to read wheezy_udp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:34.977: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:34.980: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:34.982: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:35.001: INFO: Unable to read jessie_udp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:35.004: INFO: Unable to read jessie_tcp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:35.007: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:35.009: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod 
dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:35.026: INFO: Lookups using dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f failed for: [wheezy_udp@dns-test-service.dns-6872.svc.cluster.local wheezy_tcp@dns-test-service.dns-6872.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local jessie_udp@dns-test-service.dns-6872.svc.cluster.local jessie_tcp@dns-test-service.dns-6872.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local] Mar 18 21:49:39.985: INFO: Unable to read wheezy_udp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:39.988: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:39.992: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:39.995: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:40.017: INFO: Unable to read jessie_udp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:40.020: INFO: Unable to read jessie_tcp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:40.023: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:40.027: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:40.046: INFO: Lookups using dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f failed for: [wheezy_udp@dns-test-service.dns-6872.svc.cluster.local wheezy_tcp@dns-test-service.dns-6872.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local jessie_udp@dns-test-service.dns-6872.svc.cluster.local jessie_tcp@dns-test-service.dns-6872.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local] Mar 18 
21:49:44.974: INFO: Unable to read wheezy_udp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:44.977: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:44.980: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:44.983: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:45.002: INFO: Unable to read jessie_udp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:45.005: INFO: Unable to read jessie_tcp@dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:45.008: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:45.010: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local from pod dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f: the server could not find the requested resource (get pods dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f) Mar 18 21:49:45.029: INFO: Lookups using dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f failed for: [wheezy_udp@dns-test-service.dns-6872.svc.cluster.local wheezy_tcp@dns-test-service.dns-6872.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local jessie_udp@dns-test-service.dns-6872.svc.cluster.local jessie_tcp@dns-test-service.dns-6872.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6872.svc.cluster.local] Mar 18 21:49:50.041: INFO: DNS probes using dns-6872/dns-test-29a1c5fd-bcd3-4c5a-9f32-b81b86a4a21f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:49:50.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6872" for this suite. 
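For orientation, a sketch of the kind of service behind the probes above (the selector is an assumption). The named TCP port is what makes SRV records of the form _http._tcp.<service>.<namespace>.svc.cluster.local resolvable, which is exactly what the wheezy and jessie dig loops query; the lookups keep failing until the service's endpoints exist, then succeed, as the final "DNS probes ... succeeded" line records.

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
spec:
  selector:
    dns-test: "true"                    # assumed label on the backend pods
  ports:
  - name: http                          # the port name feeds the _http._tcp SRV record
    protocol: TCP
    port: 80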
• [SLOW TEST:36.996 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":162,"skipped":2664,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:49:50.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:49:50.783: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 18 21:49:52.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2986 create -f -' Mar 18 21:49:55.595: INFO: stderr: "" Mar 18 21:49:55.595: INFO: stdout: "e2e-test-crd-publish-openapi-5287-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 18 21:49:55.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2986 delete e2e-test-crd-publish-openapi-5287-crds test-foo' Mar 18 21:49:55.723: INFO: stderr: "" Mar 18 21:49:55.723: INFO: stdout: "e2e-test-crd-publish-openapi-5287-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 18 21:49:55.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2986 apply -f -' Mar 18 21:49:55.994: INFO: stderr: "" Mar 18 21:49:55.994: INFO: stdout: "e2e-test-crd-publish-openapi-5287-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 18 21:49:55.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2986 delete e2e-test-crd-publish-openapi-5287-crds test-foo' Mar 18 21:49:56.123: INFO: stderr: "" Mar 18 21:49:56.123: INFO: stdout: "e2e-test-crd-publish-openapi-5287-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 18 21:49:56.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2986 create -f -' Mar 18 21:49:56.353: INFO: rc: 1 Mar 18 21:49:56.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2986 apply -f -' Mar 18 21:49:56.586: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 18 21:49:56.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2986 create -f -' Mar 18 21:49:56.800: INFO: 
rc: 1 Mar 18 21:49:56.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2986 apply -f -' Mar 18 21:49:57.018: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 18 21:49:57.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5287-crds' Mar 18 21:49:57.289: INFO: stderr: "" Mar 18 21:49:57.289: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5287-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 18 21:49:57.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5287-crds.metadata' Mar 18 21:49:57.538: INFO: stderr: "" Mar 18 21:49:57.538: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5287-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. 
This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 18 21:49:57.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5287-crds.spec' Mar 18 21:49:57.772: INFO: stderr: "" Mar 18 21:49:57.772: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5287-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 18 21:49:57.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5287-crds.spec.bars' Mar 18 21:49:57.997: INFO: stderr: "" Mar 18 21:49:57.997: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5287-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 18 21:49:57.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5287-crds.spec.bars2' Mar 18 21:49:58.239: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:50:00.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2986" for this suite. • [SLOW TEST:9.441 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":163,"skipped":2673,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:50:00.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 18 21:50:00.200: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 
21:50:16.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9700" for this suite. • [SLOW TEST:16.553 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":164,"skipped":2675,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:50:16.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 18 21:50:16.740: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:50:22.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9169" for this suite. 
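A sketch of the failing-init-container scenario above (names, image tag, and commands are assumptions): with restartPolicy Never, a failed init container is not retried, the app container never starts, and the pod ends up Failed.

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail-example           # hypothetical
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]             # exits non-zero; blocks the app container
  containers:
  - name: run1
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]              # never started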
• [SLOW TEST:5.702 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":165,"skipped":2680,"failed":0} SSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:50:22.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 18 21:50:26.989: INFO: Successfully updated pod "adopt-release-w828b" STEP: Checking that the Job readopts the Pod Mar 18 21:50:26.989: INFO: Waiting up to 15m0s for pod "adopt-release-w828b" in namespace "job-3483" to be "adopted" Mar 18 21:50:26.995: INFO: Pod "adopt-release-w828b": Phase="Running", Reason="", readiness=true. Elapsed: 5.753297ms Mar 18 21:50:29.000: INFO: Pod "adopt-release-w828b": Phase="Running", Reason="", readiness=true. Elapsed: 2.010126841s Mar 18 21:50:29.000: INFO: Pod "adopt-release-w828b" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 18 21:50:29.507: INFO: Successfully updated pod "adopt-release-w828b" STEP: Checking that the Job releases the Pod Mar 18 21:50:29.507: INFO: Waiting up to 15m0s for pod "adopt-release-w828b" in namespace "job-3483" to be "released" Mar 18 21:50:29.530: INFO: Pod "adopt-release-w828b": Phase="Running", Reason="", readiness=true. Elapsed: 23.338947ms Mar 18 21:50:31.534: INFO: Pod "adopt-release-w828b": Phase="Running", Reason="", readiness=true. Elapsed: 2.027076882s Mar 18 21:50:31.534: INFO: Pod "adopt-release-w828b" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:50:31.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3483" for this suite. 
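A sketch of a Job set up for the adoption test above (parallelism, label key, image, and the use of manualSelector are assumptions). Because the controller matches pods by the selector, stripping the label from a running pod makes the Job release it, and restoring the label lets the Job re-adopt it, which is what the "adopted" and "released" conditions above wait for.

apiVersion: batch/v1
kind: Job
metadata:
  name: adopt-release
spec:
  parallelism: 2                        # assumed; the test checks active pods == parallelism
  manualSelector: true                  # assumed; lets the labels be managed by hand
  selector:
    matchLabels:
      job: adopt-release
  template:
    metadata:
      labels:
        job: adopt-release
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: docker.io/library/busybox:1.29
        command: ["sleep", "1000000"]   # keep the pod Running while labels are edited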
• [SLOW TEST:9.167 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":166,"skipped":2687,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:50:31.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:50:31.680: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:50:38.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7968" for this suite. 
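Listing CRDs the way this spec exercises it takes the apiextensions clientset rather than the core one; a minimal sketch using the v1 API (available since Kubernetes 1.16) and the same kubeconfig path the run uses.

// Sketch: list all CustomResourceDefinitions in the cluster.
package main

import (
	"context"
	"fmt"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := apiextensionsclient.NewForConfigOrDie(cfg)
	crds, err := cs.ApiextensionsV1().CustomResourceDefinitions().List(
		context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
}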
• [SLOW TEST:6.605 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":167,"skipped":2688,"failed":0} S ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:50:38.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 18 21:50:38.263: INFO: Waiting up to 5m0s for pod "downward-api-ae84de73-b180-4ccb-a1d8-bc2a37af6ee3" in namespace "downward-api-8822" to be "success or failure" Mar 18 21:50:38.271: INFO: Pod "downward-api-ae84de73-b180-4ccb-a1d8-bc2a37af6ee3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.623618ms Mar 18 21:50:40.275: INFO: Pod "downward-api-ae84de73-b180-4ccb-a1d8-bc2a37af6ee3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012493141s Mar 18 21:50:42.279: INFO: Pod "downward-api-ae84de73-b180-4ccb-a1d8-bc2a37af6ee3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016581389s STEP: Saw pod success Mar 18 21:50:42.279: INFO: Pod "downward-api-ae84de73-b180-4ccb-a1d8-bc2a37af6ee3" satisfied condition "success or failure" Mar 18 21:50:42.282: INFO: Trying to get logs from node jerma-worker pod downward-api-ae84de73-b180-4ccb-a1d8-bc2a37af6ee3 container dapi-container: STEP: delete the pod Mar 18 21:50:42.321: INFO: Waiting for pod downward-api-ae84de73-b180-4ccb-a1d8-bc2a37af6ee3 to disappear Mar 18 21:50:42.325: INFO: Pod downward-api-ae84de73-b180-4ccb-a1d8-bc2a37af6ee3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:50:42.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8822" for this suite. 
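The downward-api case above injects the pod's own UID into the container environment via a fieldRef. A hedged sketch of that shape (container name, image, and command are illustrative; metadata.uid is a supported downward API field path):

// Sketch: expose the pod UID as an environment variable.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIUIDPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-uid-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "env | grep POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{
							FieldPath: "metadata.uid", // resolved by the kubelet at start
						},
					},
				}},
			}},
		},
	}
}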
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2689,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:50:42.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8812.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8812.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8812.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8812.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8812.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8812.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8812.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8812.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8812.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8812.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 18 21:50:48.487: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:48.491: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:48.495: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:48.498: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:48.507: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:48.510: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:48.512: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:48.515: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:48.525: INFO: Lookups using dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8812.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8812.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local jessie_udp@dns-test-service-2.dns-8812.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8812.svc.cluster.local] Mar 18 21:50:53.529: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource 
(get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:53.532: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:53.535: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:53.538: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:53.547: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:53.549: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:53.552: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:53.555: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:53.561: INFO: Lookups using dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8812.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8812.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local jessie_udp@dns-test-service-2.dns-8812.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8812.svc.cluster.local] Mar 18 21:50:58.530: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:58.532: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:58.535: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:58.537: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8812.svc.cluster.local from 
pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:58.547: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:58.550: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:58.553: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:58.556: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:50:58.562: INFO: Lookups using dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8812.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8812.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local jessie_udp@dns-test-service-2.dns-8812.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8812.svc.cluster.local] Mar 18 21:51:03.529: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:03.532: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:03.535: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:03.538: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:03.548: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:03.551: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods 
dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:03.554: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:03.557: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:03.564: INFO: Lookups using dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8812.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8812.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local jessie_udp@dns-test-service-2.dns-8812.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8812.svc.cluster.local] Mar 18 21:51:08.529: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:08.532: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:08.535: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:08.539: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:08.549: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:08.552: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:08.554: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:08.557: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:08.563: INFO: Lookups using dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8812.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8812.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local jessie_udp@dns-test-service-2.dns-8812.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8812.svc.cluster.local] Mar 18 21:51:13.530: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:13.534: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:13.537: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:13.540: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:13.548: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:13.551: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:13.555: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:13.558: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8812.svc.cluster.local from pod dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d: the server could not find the requested resource (get pods dns-test-fab76667-4fd0-4f57-93f4-727299b1352d) Mar 18 21:51:13.564: INFO: Lookups using dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8812.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8812.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8812.svc.cluster.local jessie_udp@dns-test-service-2.dns-8812.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8812.svc.cluster.local] Mar 18 21:51:18.565: INFO: DNS probes using dns-8812/dns-test-fab76667-4fd0-4f57-93f4-727299b1352d succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:51:18.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8812" for this suite. • [SLOW TEST:36.701 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":169,"skipped":2693,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:51:19.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:51:19.194: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 5.189081ms)
Mar 18 21:51:19.198: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.874683ms)
Mar 18 21:51:19.201: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.359468ms)
Mar 18 21:51:19.205: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.926312ms)
Mar 18 21:51:19.214: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 9.046504ms)
Mar 18 21:51:19.217: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.269719ms)
Mar 18 21:51:19.220: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.633359ms)
Mar 18 21:51:19.222: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.416163ms)
Mar 18 21:51:19.225: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.550654ms)
Mar 18 21:51:19.228: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.521895ms)
Mar 18 21:51:19.230: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.631981ms)
Mar 18 21:51:19.233: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.729399ms)
Mar 18 21:51:19.236: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.656788ms)
Mar 18 21:51:19.239: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.052493ms)
Mar 18 21:51:19.242: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.778696ms)
Mar 18 21:51:19.245: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.311514ms)
Mar 18 21:51:19.267: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 22.515398ms)
Mar 18 21:51:19.272: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.268753ms)
Mar 18 21:51:19.278: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 6.372288ms)
Mar 18 21:51:19.282: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/
(200; 3.788681ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:51:19.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1898" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":170,"skipped":2702,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:51:19.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-84d9907e-95b1-42f3-ac7d-78af0b0b2e20 STEP: Creating a pod to test consume secrets Mar 18 21:51:19.363: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f2e0589d-aa39-48e0-9d93-1a5598cfae65" in namespace "projected-9962" to be "success or failure" Mar 18 21:51:19.387: INFO: Pod "pod-projected-secrets-f2e0589d-aa39-48e0-9d93-1a5598cfae65": Phase="Pending", Reason="", readiness=false. Elapsed: 23.164018ms Mar 18 21:51:21.390: INFO: Pod "pod-projected-secrets-f2e0589d-aa39-48e0-9d93-1a5598cfae65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026517566s Mar 18 21:51:23.394: INFO: Pod "pod-projected-secrets-f2e0589d-aa39-48e0-9d93-1a5598cfae65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030611597s STEP: Saw pod success Mar 18 21:51:23.394: INFO: Pod "pod-projected-secrets-f2e0589d-aa39-48e0-9d93-1a5598cfae65" satisfied condition "success or failure" Mar 18 21:51:23.397: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-f2e0589d-aa39-48e0-9d93-1a5598cfae65 container secret-volume-test: STEP: delete the pod Mar 18 21:51:23.425: INFO: Waiting for pod pod-projected-secrets-f2e0589d-aa39-48e0-9d93-1a5598cfae65 to disappear Mar 18 21:51:23.430: INFO: Pod pod-projected-secrets-f2e0589d-aa39-48e0-9d93-1a5598cfae65 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:51:23.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9962" for this suite. 
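The projected-secret case above mounts one secret into two volumes of the same pod. A minimal sketch of that shape under stated assumptions (volume names, mount paths, image, and command are illustrative):

// Sketch: one secret consumed via two projected volumes in one pod.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func twoVolumeSecretPod(secretName string) *corev1.Pod {
	mkVol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						},
					}},
				},
			},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{mkVol("vol-1"), mkVol("vol-2")},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/projected-1/* /etc/projected-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "vol-1", MountPath: "/etc/projected-1", ReadOnly: true},
					{Name: "vol-2", MountPath: "/etc/projected-2", ReadOnly: true},
				},
			}},
		},
	}
}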
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2719,"failed":0} ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:51:23.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 21:51:23.527: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5862974e-8f8a-4ce8-97a6-f43ef352d952" in namespace "downward-api-5030" to be "success or failure" Mar 18 21:51:23.546: INFO: Pod "downwardapi-volume-5862974e-8f8a-4ce8-97a6-f43ef352d952": Phase="Pending", Reason="", readiness=false. Elapsed: 18.670391ms Mar 18 21:51:25.550: INFO: Pod "downwardapi-volume-5862974e-8f8a-4ce8-97a6-f43ef352d952": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023225775s Mar 18 21:51:27.554: INFO: Pod "downwardapi-volume-5862974e-8f8a-4ce8-97a6-f43ef352d952": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027501601s STEP: Saw pod success Mar 18 21:51:27.555: INFO: Pod "downwardapi-volume-5862974e-8f8a-4ce8-97a6-f43ef352d952" satisfied condition "success or failure" Mar 18 21:51:27.558: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-5862974e-8f8a-4ce8-97a6-f43ef352d952 container client-container: STEP: delete the pod Mar 18 21:51:27.628: INFO: Waiting for pod downwardapi-volume-5862974e-8f8a-4ce8-97a6-f43ef352d952 to disappear Mar 18 21:51:27.633: INFO: Pod downwardapi-volume-5862974e-8f8a-4ce8-97a6-f43ef352d952 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:51:27.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5030" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2719,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:51:27.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-502e182c-1242-4cb1-a60d-d2352595433b STEP: Creating a pod to test consume configMaps Mar 18 21:51:27.713: INFO: Waiting up to 5m0s for pod "pod-configmaps-fd7cc28b-33f8-4781-b257-fd23a8aabaa6" in namespace "configmap-8153" to be "success or failure" Mar 18 21:51:27.718: INFO: Pod "pod-configmaps-fd7cc28b-33f8-4781-b257-fd23a8aabaa6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.501831ms Mar 18 21:51:29.721: INFO: Pod "pod-configmaps-fd7cc28b-33f8-4781-b257-fd23a8aabaa6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007520437s Mar 18 21:51:31.725: INFO: Pod "pod-configmaps-fd7cc28b-33f8-4781-b257-fd23a8aabaa6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011933944s STEP: Saw pod success Mar 18 21:51:31.725: INFO: Pod "pod-configmaps-fd7cc28b-33f8-4781-b257-fd23a8aabaa6" satisfied condition "success or failure" Mar 18 21:51:31.728: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-fd7cc28b-33f8-4781-b257-fd23a8aabaa6 container configmap-volume-test: STEP: delete the pod Mar 18 21:51:31.764: INFO: Waiting for pod pod-configmaps-fd7cc28b-33f8-4781-b257-fd23a8aabaa6 to disappear Mar 18 21:51:31.775: INFO: Pod pod-configmaps-fd7cc28b-33f8-4781-b257-fd23a8aabaa6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:51:31.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8153" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2740,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:51:31.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:51:31.868: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 18 21:51:34.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1784 create -f -' Mar 18 21:51:37.721: INFO: stderr: "" Mar 18 21:51:37.721: INFO: stdout: "e2e-test-crd-publish-openapi-1737-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 18 21:51:37.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1784 delete e2e-test-crd-publish-openapi-1737-crds test-cr' Mar 18 21:51:37.841: INFO: stderr: "" Mar 18 21:51:37.841: INFO: stdout: "e2e-test-crd-publish-openapi-1737-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 18 21:51:37.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1784 apply -f -' Mar 18 21:51:38.107: INFO: stderr: "" Mar 18 21:51:38.107: INFO: stdout: "e2e-test-crd-publish-openapi-1737-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 18 21:51:38.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1784 delete e2e-test-crd-publish-openapi-1737-crds test-cr' Mar 18 21:51:38.210: INFO: stderr: "" Mar 18 21:51:38.210: INFO: stdout: "e2e-test-crd-publish-openapi-1737-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 18 21:51:38.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1737-crds' Mar 18 21:51:38.455: INFO: stderr: "" Mar 18 21:51:38.455: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1737-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. 
Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:51:41.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1784" for this suite. • [SLOW TEST:9.574 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":174,"skipped":2742,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:51:41.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:51:45.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5041" for this suite. 
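The Kubelet case above asserts on the container's terminated-state reason. A minimal sketch of reading that field via client-go (v0.18+ context-style signatures assumed; function name is illustrative):

// Sketch: fetch a pod and return the first container's terminated reason.
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func terminatedReason(cs kubernetes.Interface, ns, pod string) (string, error) {
	p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), pod, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	st := p.Status.ContainerStatuses[0].State.Terminated
	if st == nil {
		return "", fmt.Errorf("container has not terminated yet")
	}
	return st.Reason, nil // e.g. "Error" for a command that always fails
}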
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2744,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:51:45.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 21:51:46.071: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 21:51:48.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165106, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165106, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165106, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165106, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 21:51:51.112: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:51:51.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4665" for this suite. STEP: Destroying namespace "webhook-4665-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.348 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":176,"skipped":2754,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:51:51.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1632 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 18 21:51:51.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2277' Mar 18 21:51:51.978: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 18 21:51:51.978: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Mar 18 21:51:52.009: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-9bkc7] Mar 18 21:51:52.009: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-9bkc7" in namespace "kubectl-2277" to be "running and ready" Mar 18 21:51:52.046: INFO: Pod "e2e-test-httpd-rc-9bkc7": Phase="Pending", Reason="", readiness=false. Elapsed: 37.2657ms Mar 18 21:51:54.050: INFO: Pod "e2e-test-httpd-rc-9bkc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04113479s Mar 18 21:51:56.054: INFO: Pod "e2e-test-httpd-rc-9bkc7": Phase="Running", Reason="", readiness=true. Elapsed: 4.045166382s Mar 18 21:51:56.054: INFO: Pod "e2e-test-httpd-rc-9bkc7" satisfied condition "running and ready" Mar 18 21:51:56.054: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-httpd-rc-9bkc7] Mar 18 21:51:56.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-2277' Mar 18 21:51:56.170: INFO: stderr: "" Mar 18 21:51:56.170: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.73. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.73. Set the 'ServerName' directive globally to suppress this message\n[Wed Mar 18 21:51:54.188425 2020] [mpm_event:notice] [pid 1:tid 140227573140328] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed Mar 18 21:51:54.188474 2020] [core:notice] [pid 1:tid 140227573140328] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1637 Mar 18 21:51:56.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2277' Mar 18 21:51:56.293: INFO: stderr: "" Mar 18 21:51:56.293: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:51:56.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2277" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":177,"skipped":2756,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:51:56.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 21:51:56.360: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f0cfd89-ac97-43cb-ba88-ee6d0130e912" in namespace "projected-1386" to be "success or failure" Mar 18 21:51:56.384: INFO: Pod "downwardapi-volume-0f0cfd89-ac97-43cb-ba88-ee6d0130e912": Phase="Pending", Reason="", readiness=false. Elapsed: 24.639751ms Mar 18 21:51:58.389: INFO: Pod "downwardapi-volume-0f0cfd89-ac97-43cb-ba88-ee6d0130e912": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029089624s Mar 18 21:52:00.393: INFO: Pod "downwardapi-volume-0f0cfd89-ac97-43cb-ba88-ee6d0130e912": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033417339s STEP: Saw pod success Mar 18 21:52:00.393: INFO: Pod "downwardapi-volume-0f0cfd89-ac97-43cb-ba88-ee6d0130e912" satisfied condition "success or failure" Mar 18 21:52:00.396: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0f0cfd89-ac97-43cb-ba88-ee6d0130e912 container client-container: STEP: delete the pod Mar 18 21:52:00.430: INFO: Waiting for pod downwardapi-volume-0f0cfd89-ac97-43cb-ba88-ee6d0130e912 to disappear Mar 18 21:52:00.436: INFO: Pod downwardapi-volume-0f0cfd89-ac97-43cb-ba88-ee6d0130e912 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:52:00.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1386" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2765,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:52:00.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 18 21:52:00.509: INFO: Waiting up to 5m0s for pod "downward-api-4ffbab15-8420-4708-a96d-e42e351bc06f" in namespace "downward-api-9840" to be "success or failure" Mar 18 21:52:00.520: INFO: Pod "downward-api-4ffbab15-8420-4708-a96d-e42e351bc06f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.309764ms Mar 18 21:52:02.524: INFO: Pod "downward-api-4ffbab15-8420-4708-a96d-e42e351bc06f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014316728s Mar 18 21:52:04.528: INFO: Pod "downward-api-4ffbab15-8420-4708-a96d-e42e351bc06f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018457508s STEP: Saw pod success Mar 18 21:52:04.528: INFO: Pod "downward-api-4ffbab15-8420-4708-a96d-e42e351bc06f" satisfied condition "success or failure" Mar 18 21:52:04.531: INFO: Trying to get logs from node jerma-worker2 pod downward-api-4ffbab15-8420-4708-a96d-e42e351bc06f container dapi-container: STEP: delete the pod Mar 18 21:52:04.593: INFO: Waiting for pod downward-api-4ffbab15-8420-4708-a96d-e42e351bc06f to disappear Mar 18 21:52:04.598: INFO: Pod downward-api-4ffbab15-8420-4708-a96d-e42e351bc06f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:52:04.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9840" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2776,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:52:04.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0318 21:52:45.006305 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 18 21:52:45.006: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:52:45.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1885" for this suite. 
• [SLOW TEST:40.408 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":180,"skipped":2783,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:52:45.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 18 21:52:45.099: INFO: Waiting up to 5m0s for pod "pod-9c6bed66-85c9-460b-a25c-60a7f8e5f390" in namespace "emptydir-8417" to be "success or failure" Mar 18 21:52:45.103: INFO: Pod "pod-9c6bed66-85c9-460b-a25c-60a7f8e5f390": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148564ms Mar 18 21:52:47.108: INFO: Pod "pod-9c6bed66-85c9-460b-a25c-60a7f8e5f390": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008485804s Mar 18 21:52:49.112: INFO: Pod "pod-9c6bed66-85c9-460b-a25c-60a7f8e5f390": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012723209s STEP: Saw pod success Mar 18 21:52:49.112: INFO: Pod "pod-9c6bed66-85c9-460b-a25c-60a7f8e5f390" satisfied condition "success or failure" Mar 18 21:52:49.115: INFO: Trying to get logs from node jerma-worker pod pod-9c6bed66-85c9-460b-a25c-60a7f8e5f390 container test-container: STEP: delete the pod Mar 18 21:52:49.174: INFO: Waiting for pod pod-9c6bed66-85c9-460b-a25c-60a7f8e5f390 to disappear Mar 18 21:52:49.185: INFO: Pod pod-9c6bed66-85c9-460b-a25c-60a7f8e5f390 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:52:49.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8417" for this suite. 
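The emptyDir spec above boils down to a default-medium emptyDir mounted into a non-root container that writes a file and verifies its 0666 mode. A sketch of that pod shape, assuming an illustrative image, UID, and mount path:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	uid := int64(1001) // any non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty EmptyDirVolumeSource means "default medium":
				// node-local disk rather than tmpfs.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, err := yaml.Marshal(&pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```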
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2803,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:52:49.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1788 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 18 21:52:49.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-481' Mar 18 21:52:49.517: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 18 21:52:49.517: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1793 Mar 18 21:52:49.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-481' Mar 18 21:52:49.650: INFO: stderr: "" Mar 18 21:52:49.650: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:52:49.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-481" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":182,"skipped":2818,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:52:49.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:52:49.719: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 18 21:52:52.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8836 create -f -' Mar 18 21:52:56.941: INFO: stderr: "" Mar 18 21:52:56.941: INFO: stdout: "e2e-test-crd-publish-openapi-5022-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 18 21:52:56.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8836 delete e2e-test-crd-publish-openapi-5022-crds test-cr' Mar 18 21:52:57.054: INFO: stderr: "" Mar 18 21:52:57.054: INFO: stdout: "e2e-test-crd-publish-openapi-5022-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 18 21:52:57.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8836 apply -f -' Mar 18 21:52:57.296: INFO: stderr: "" Mar 18 21:52:57.296: INFO: stdout: "e2e-test-crd-publish-openapi-5022-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 18 21:52:57.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8836 delete e2e-test-crd-publish-openapi-5022-crds test-cr' Mar 18 21:52:57.393: INFO: stderr: "" Mar 18 21:52:57.393: INFO: stdout: "e2e-test-crd-publish-openapi-5022-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 18 21:52:57.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5022-crds' Mar 18 21:52:57.657: INFO: stderr: "" Mar 18 21:52:57.657: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5022-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:53:00.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8836" for this suite. 
• [SLOW TEST:10.890 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":183,"skipped":2822,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:53:00.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 21:53:01.204: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 21:53:03.214: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165181, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165181, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165181, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165181, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 21:53:06.246: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:53:06.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6705" for this suite. STEP: Destroying namespace "webhook-6705-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.967 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":184,"skipped":2870,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:53:06.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 21:53:07.191: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 21:53:09.200: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165187, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165187, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165187, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165187, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 21:53:12.228: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:53:12.236: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Registering the mutating webhook for custom resource e2e-test-webhook-5975-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:53:13.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9277" for this suite. STEP: Destroying namespace "webhook-9277-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.919 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":185,"skipped":2887,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:53:13.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:53:17.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1091" for this suite. 
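Both admission-webhook specs above (patching a mutating webhook's rules, and mutating a custom resource) center on the same building block: the webhook handler returns an AdmissionReview whose response allows the request and attaches a JSONPatch. A minimal sketch of such a response; the patched key and UID are illustrative, not the tests' actual payloads.

```go
package main

import (
	"encoding/json"
	"fmt"

	admissionv1 "k8s.io/api/admission/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// mutateResponse builds the AdmissionReview a mutating webhook would return:
// allow the request and add a marker key to the incoming object's data.
func mutateResponse(req *admissionv1.AdmissionRequest) admissionv1.AdmissionReview {
	patch := []byte(`[{"op":"add","path":"/data/mutated","value":"true"}]`)
	pt := admissionv1.PatchTypeJSONPatch
	return admissionv1.AdmissionReview{
		TypeMeta: metav1.TypeMeta{APIVersion: "admission.k8s.io/v1", Kind: "AdmissionReview"},
		Response: &admissionv1.AdmissionResponse{
			UID:       req.UID, // must echo the request UID
			Allowed:   true,
			Patch:     patch,
			PatchType: &pt,
		},
	}
}

func main() {
	review := mutateResponse(&admissionv1.AdmissionRequest{UID: types.UID("example-uid")})
	out, _ := json.MarshalIndent(review, "", "  ")
	fmt.Println(string(out))
}
```

The "patching/updating" spec then toggles which operations the webhook configuration's rules match (removing and re-adding CREATE) to show that only matching requests get mutated.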
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":186,"skipped":2913,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:53:17.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3320 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 18 21:53:17.788: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 18 21:53:40.150: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.83:8080/dial?request=hostname&protocol=udp&host=10.244.1.57&port=8081&tries=1'] Namespace:pod-network-test-3320 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:53:40.150: INFO: >>> kubeConfig: /root/.kube/config I0318 21:53:40.183750 6 log.go:172] (0xc006af4a50) (0xc0019ef400) Create stream I0318 21:53:40.183791 6 log.go:172] (0xc006af4a50) (0xc0019ef400) Stream added, broadcasting: 1 I0318 21:53:40.186228 6 log.go:172] (0xc006af4a50) Reply frame received for 1 I0318 21:53:40.186288 6 log.go:172] (0xc006af4a50) (0xc0013fc8c0) Create stream I0318 21:53:40.186309 6 log.go:172] (0xc006af4a50) (0xc0013fc8c0) Stream added, broadcasting: 3 I0318 21:53:40.187532 6 log.go:172] (0xc006af4a50) Reply frame received for 3 I0318 21:53:40.187579 6 log.go:172] (0xc006af4a50) (0xc002842fa0) Create stream I0318 21:53:40.187600 6 log.go:172] (0xc006af4a50) (0xc002842fa0) Stream added, broadcasting: 5 I0318 21:53:40.188538 6 log.go:172] (0xc006af4a50) Reply frame received for 5 I0318 21:53:40.288241 6 log.go:172] (0xc006af4a50) Data frame received for 3 I0318 21:53:40.288285 6 log.go:172] (0xc0013fc8c0) (3) Data frame handling I0318 21:53:40.288311 6 log.go:172] (0xc0013fc8c0) (3) Data frame sent I0318 21:53:40.289073 6 log.go:172] (0xc006af4a50) Data frame received for 5 I0318 21:53:40.289101 6 log.go:172] (0xc002842fa0) (5) Data frame handling I0318 21:53:40.289242 6 log.go:172] (0xc006af4a50) Data frame received for 3 I0318 21:53:40.289271 6 log.go:172] (0xc0013fc8c0) (3) Data frame handling I0318 21:53:40.291086 6 log.go:172] (0xc006af4a50) Data frame received for 1 I0318 21:53:40.291102 6 log.go:172] (0xc0019ef400) (1) Data frame handling I0318 21:53:40.291113 6 log.go:172] (0xc0019ef400) (1) Data frame sent I0318 21:53:40.291252 6 log.go:172] (0xc006af4a50) (0xc0019ef400) Stream removed, broadcasting: 1 I0318 21:53:40.291308 6 log.go:172] (0xc006af4a50) (0xc0019ef400) Stream removed, broadcasting: 1 I0318 21:53:40.291317 6 log.go:172] (0xc006af4a50) (0xc0013fc8c0) Stream removed, broadcasting: 3 I0318 21:53:40.291441 6 log.go:172] (0xc006af4a50) (0xc002842fa0) Stream removed, broadcasting: 5 Mar 
18 21:53:40.291: INFO: Waiting for responses: map[] I0318 21:53:40.291537 6 log.go:172] (0xc006af4a50) Go away received Mar 18 21:53:40.294: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.83:8080/dial?request=hostname&protocol=udp&host=10.244.2.82&port=8081&tries=1'] Namespace:pod-network-test-3320 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 21:53:40.294: INFO: >>> kubeConfig: /root/.kube/config I0318 21:53:40.323250 6 log.go:172] (0xc006aea630) (0xc0013fcdc0) Create stream I0318 21:53:40.323278 6 log.go:172] (0xc006aea630) (0xc0013fcdc0) Stream added, broadcasting: 1 I0318 21:53:40.325618 6 log.go:172] (0xc006aea630) Reply frame received for 1 I0318 21:53:40.325657 6 log.go:172] (0xc006aea630) (0xc0013fcfa0) Create stream I0318 21:53:40.325672 6 log.go:172] (0xc006aea630) (0xc0013fcfa0) Stream added, broadcasting: 3 I0318 21:53:40.326488 6 log.go:172] (0xc006aea630) Reply frame received for 3 I0318 21:53:40.326531 6 log.go:172] (0xc006aea630) (0xc0013fd040) Create stream I0318 21:53:40.326542 6 log.go:172] (0xc006aea630) (0xc0013fd040) Stream added, broadcasting: 5 I0318 21:53:40.327345 6 log.go:172] (0xc006aea630) Reply frame received for 5 I0318 21:53:40.404055 6 log.go:172] (0xc006aea630) Data frame received for 3 I0318 21:53:40.404087 6 log.go:172] (0xc0013fcfa0) (3) Data frame handling I0318 21:53:40.404096 6 log.go:172] (0xc0013fcfa0) (3) Data frame sent I0318 21:53:40.405625 6 log.go:172] (0xc006aea630) Data frame received for 3 I0318 21:53:40.405665 6 log.go:172] (0xc0013fcfa0) (3) Data frame handling I0318 21:53:40.405697 6 log.go:172] (0xc006aea630) Data frame received for 5 I0318 21:53:40.405735 6 log.go:172] (0xc0013fd040) (5) Data frame handling I0318 21:53:40.407076 6 log.go:172] (0xc006aea630) Data frame received for 1 I0318 21:53:40.407120 6 log.go:172] (0xc0013fcdc0) (1) Data frame handling I0318 21:53:40.407171 6 log.go:172] (0xc0013fcdc0) (1) Data frame sent I0318 21:53:40.407194 6 log.go:172] (0xc006aea630) (0xc0013fcdc0) Stream removed, broadcasting: 1 I0318 21:53:40.407214 6 log.go:172] (0xc006aea630) Go away received I0318 21:53:40.407361 6 log.go:172] (0xc006aea630) (0xc0013fcdc0) Stream removed, broadcasting: 1 I0318 21:53:40.407397 6 log.go:172] (0xc006aea630) (0xc0013fcfa0) Stream removed, broadcasting: 3 I0318 21:53:40.407423 6 log.go:172] (0xc006aea630) (0xc0013fd040) Stream removed, broadcasting: 5 Mar 18 21:53:40.407: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:53:40.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3320" for this suite. 
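The intra-pod UDP check above uses the agnhost test container's /dial endpoint: the host-network pod asks 10.244.2.83:8080 to send a UDP "hostname" probe to each endpoint pod, matching the curl commands in the log. A standalone sketch of the same request; the IPs and ports are the placeholders from this run, and the JSON field name "responses" is an assumption inferred from the framework's "Waiting for responses" message.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"net/url"
)

func main() {
	q := url.Values{
		"request":  {"hostname"},
		"protocol": {"udp"},
		"host":     {"10.244.1.57"}, // endpoint pod IP from the run above
		"port":     {"8081"},
		"tries":    {"1"},
	}
	u := "http://10.244.2.83:8080/dial?" + q.Encode()

	resp, err := http.Get(u)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The dial endpoint reports which target hostnames answered the probe.
	var result struct {
		Responses []string `json:"responses"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		log.Fatal(err)
	}
	fmt.Println("responders:", result.Responses)
}
```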
• [SLOW TEST:22.783 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":2919,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:53:40.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:54:06.507: INFO: Container started at 2020-03-18 21:53:42 +0000 UTC, pod became ready at 2020-03-18 21:54:05 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:54:06.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-363" for this suite. 
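The readiness-probe spec above asserts that a pod does not report Ready before the probe's initial delay (here the container started at 21:53:42 and became ready at 21:54:05). A sketch of the probe shape involved; the image, port, and the 20-second delay are illustrative values, not read from the test source.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "web",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // illustrative test-server image
				Args:  []string{"test-webserver"},
				ReadinessProbe: &corev1.Probe{
					// Recent k8s.io/api names this field ProbeHandler;
					// older releases (such as the v1.17-era API) call it Handler.
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 20, // the pod must not report Ready before this
					PeriodSeconds:       5,
				},
			}},
		},
	}
	out, err := yaml.Marshal(&pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```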
• [SLOW TEST:26.100 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":2927,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:54:06.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-42579eca-8ee5-468e-af4e-ef50f29a32ed STEP: Creating a pod to test consume secrets Mar 18 21:54:06.643: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b7a10fca-a6e1-41eb-a8fb-33dd42b61aba" in namespace "projected-680" to be "success or failure" Mar 18 21:54:06.671: INFO: Pod "pod-projected-secrets-b7a10fca-a6e1-41eb-a8fb-33dd42b61aba": Phase="Pending", Reason="", readiness=false. Elapsed: 27.9225ms Mar 18 21:54:08.675: INFO: Pod "pod-projected-secrets-b7a10fca-a6e1-41eb-a8fb-33dd42b61aba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032303745s Mar 18 21:54:10.679: INFO: Pod "pod-projected-secrets-b7a10fca-a6e1-41eb-a8fb-33dd42b61aba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036258437s STEP: Saw pod success Mar 18 21:54:10.679: INFO: Pod "pod-projected-secrets-b7a10fca-a6e1-41eb-a8fb-33dd42b61aba" satisfied condition "success or failure" Mar 18 21:54:10.682: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-b7a10fca-a6e1-41eb-a8fb-33dd42b61aba container projected-secret-volume-test: STEP: delete the pod Mar 18 21:54:10.711: INFO: Waiting for pod pod-projected-secrets-b7a10fca-a6e1-41eb-a8fb-33dd42b61aba to disappear Mar 18 21:54:10.740: INFO: Pod pod-projected-secrets-b7a10fca-a6e1-41eb-a8fb-33dd42b61aba no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:54:10.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-680" for this suite. 
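The projected-secret spec above maps a secret key to a new file path with an explicit per-item file mode ("Item Mode"). A sketch of just the volume definition involved; the secret name, key, path, and 0400 mode are illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	mode := int32(0400) // the explicit per-file mode the spec verifies
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-secret-test-map", // illustrative secret name
						},
						// Map one key to a new path with an explicit mode.
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
	out, err := yaml.Marshal(&vol)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```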
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":2949,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:54:10.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 18 21:54:18.229: INFO: 10 pods remaining Mar 18 21:54:18.229: INFO: 0 pods has nil DeletionTimestamp Mar 18 21:54:18.229: INFO: Mar 18 21:54:19.094: INFO: 0 pods remaining Mar 18 21:54:19.094: INFO: 0 pods has nil DeletionTimestamp Mar 18 21:54:19.094: INFO: STEP: Gathering metrics W0318 21:54:20.324735 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 18 21:54:20.324: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:54:20.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9249" for this suite. 
• [SLOW TEST:9.608 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":190,"skipped":2953,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:54:20.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 18 21:54:24.919: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 18 21:54:30.016: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:54:30.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9536" for this suite. 
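The delete-grace-period spec above issues a graceful pod delete and then polls until the kubelet has observed the termination notice and the pod object is gone. A sketch of the delete call involved; the pod name, namespace, and the 30-second grace period are illustrative.

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// GracePeriodSeconds gives the kubelet time to observe the termination
	// notice before the pod object disappears.
	grace := int64(30)
	if err := cs.CoreV1().Pods("default").Delete(
		context.TODO(), "example-pod",
		metav1.DeleteOptions{GracePeriodSeconds: &grace},
	); err != nil {
		log.Fatal(err)
	}
	log.Println("pod deletion requested with a 30s grace period")
}
```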
• [SLOW TEST:9.678 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":191,"skipped":2962,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:54:30.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-cc95479e-6c97-483f-b8f5-aa4ada742832 STEP: Creating a pod to test consume secrets Mar 18 21:54:30.118: INFO: Waiting up to 5m0s for pod "pod-secrets-53991f0c-d85d-4b38-a395-8d9c0a8c573e" in namespace "secrets-5679" to be "success or failure" Mar 18 21:54:30.142: INFO: Pod "pod-secrets-53991f0c-d85d-4b38-a395-8d9c0a8c573e": Phase="Pending", Reason="", readiness=false. Elapsed: 24.038823ms Mar 18 21:54:32.146: INFO: Pod "pod-secrets-53991f0c-d85d-4b38-a395-8d9c0a8c573e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028221592s Mar 18 21:54:34.150: INFO: Pod "pod-secrets-53991f0c-d85d-4b38-a395-8d9c0a8c573e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032604467s STEP: Saw pod success Mar 18 21:54:34.150: INFO: Pod "pod-secrets-53991f0c-d85d-4b38-a395-8d9c0a8c573e" satisfied condition "success or failure" Mar 18 21:54:34.153: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-53991f0c-d85d-4b38-a395-8d9c0a8c573e container secret-volume-test: STEP: delete the pod Mar 18 21:54:34.172: INFO: Waiting for pod pod-secrets-53991f0c-d85d-4b38-a395-8d9c0a8c573e to disappear Mar 18 21:54:34.193: INFO: Pod pod-secrets-53991f0c-d85d-4b38-a395-8d9c0a8c573e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:54:34.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5679" for this suite. 
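For the plain (non-projected) secret-volume spec above, the moving parts are a Secret and a pod volume that mounts it, after which the test container reads the file back. A sketch of both objects; key names and values are illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	secret := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	// A pod volume referencing the secret; each data key becomes a file
	// under the mount path.
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: secret.Name},
		},
	}
	for _, obj := range []interface{}{secret, vol} {
		out, err := yaml.Marshal(obj)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}
}
```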
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":2999,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:54:34.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-wb6p STEP: Creating a pod to test atomic-volume-subpath Mar 18 21:54:34.284: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wb6p" in namespace "subpath-4628" to be "success or failure" Mar 18 21:54:34.291: INFO: Pod "pod-subpath-test-configmap-wb6p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.897092ms Mar 18 21:54:36.312: INFO: Pod "pod-subpath-test-configmap-wb6p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0275855s Mar 18 21:54:38.316: INFO: Pod "pod-subpath-test-configmap-wb6p": Phase="Running", Reason="", readiness=true. Elapsed: 4.03223264s Mar 18 21:54:40.321: INFO: Pod "pod-subpath-test-configmap-wb6p": Phase="Running", Reason="", readiness=true. Elapsed: 6.036660544s Mar 18 21:54:42.325: INFO: Pod "pod-subpath-test-configmap-wb6p": Phase="Running", Reason="", readiness=true. Elapsed: 8.040741033s Mar 18 21:54:44.329: INFO: Pod "pod-subpath-test-configmap-wb6p": Phase="Running", Reason="", readiness=true. Elapsed: 10.045061324s Mar 18 21:54:46.334: INFO: Pod "pod-subpath-test-configmap-wb6p": Phase="Running", Reason="", readiness=true. Elapsed: 12.04937148s Mar 18 21:54:48.338: INFO: Pod "pod-subpath-test-configmap-wb6p": Phase="Running", Reason="", readiness=true. Elapsed: 14.053988535s Mar 18 21:54:50.342: INFO: Pod "pod-subpath-test-configmap-wb6p": Phase="Running", Reason="", readiness=true. Elapsed: 16.058034893s Mar 18 21:54:52.346: INFO: Pod "pod-subpath-test-configmap-wb6p": Phase="Running", Reason="", readiness=true. Elapsed: 18.06217723s Mar 18 21:54:54.350: INFO: Pod "pod-subpath-test-configmap-wb6p": Phase="Running", Reason="", readiness=true. Elapsed: 20.066026621s Mar 18 21:54:56.354: INFO: Pod "pod-subpath-test-configmap-wb6p": Phase="Running", Reason="", readiness=true. Elapsed: 22.070316547s Mar 18 21:54:58.359: INFO: Pod "pod-subpath-test-configmap-wb6p": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.074694373s STEP: Saw pod success Mar 18 21:54:58.359: INFO: Pod "pod-subpath-test-configmap-wb6p" satisfied condition "success or failure" Mar 18 21:54:58.361: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-wb6p container test-container-subpath-configmap-wb6p: STEP: delete the pod Mar 18 21:54:58.409: INFO: Waiting for pod pod-subpath-test-configmap-wb6p to disappear Mar 18 21:54:58.412: INFO: Pod pod-subpath-test-configmap-wb6p no longer exists STEP: Deleting pod pod-subpath-test-configmap-wb6p Mar 18 21:54:58.412: INFO: Deleting pod "pod-subpath-test-configmap-wb6p" in namespace "subpath-4628" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:54:58.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4628" for this suite. • [SLOW TEST:24.218 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":193,"skipped":3017,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:54:58.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 18 21:54:58.466: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7290 /api/v1/namespaces/watch-7290/configmaps/e2e-watch-test-configmap-a 498ea562-88f8-4f6d-8a6b-c6d250e56068 860815 0 2020-03-18 21:54:58 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 18 21:54:58.466: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7290 /api/v1/namespaces/watch-7290/configmaps/e2e-watch-test-configmap-a 498ea562-88f8-4f6d-8a6b-c6d250e56068 860815 0 2020-03-18 21:54:58 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 18 21:55:08.474: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7290 
/api/v1/namespaces/watch-7290/configmaps/e2e-watch-test-configmap-a 498ea562-88f8-4f6d-8a6b-c6d250e56068 860856 0 2020-03-18 21:54:58 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 18 21:55:08.475: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7290 /api/v1/namespaces/watch-7290/configmaps/e2e-watch-test-configmap-a 498ea562-88f8-4f6d-8a6b-c6d250e56068 860856 0 2020-03-18 21:54:58 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 18 21:55:18.483: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7290 /api/v1/namespaces/watch-7290/configmaps/e2e-watch-test-configmap-a 498ea562-88f8-4f6d-8a6b-c6d250e56068 860886 0 2020-03-18 21:54:58 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 18 21:55:18.483: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7290 /api/v1/namespaces/watch-7290/configmaps/e2e-watch-test-configmap-a 498ea562-88f8-4f6d-8a6b-c6d250e56068 860886 0 2020-03-18 21:54:58 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 18 21:55:28.490: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7290 /api/v1/namespaces/watch-7290/configmaps/e2e-watch-test-configmap-a 498ea562-88f8-4f6d-8a6b-c6d250e56068 860916 0 2020-03-18 21:54:58 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 18 21:55:28.490: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7290 /api/v1/namespaces/watch-7290/configmaps/e2e-watch-test-configmap-a 498ea562-88f8-4f6d-8a6b-c6d250e56068 860916 0 2020-03-18 21:54:58 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 18 21:55:38.497: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7290 /api/v1/namespaces/watch-7290/configmaps/e2e-watch-test-configmap-b 32ca33ff-7e4b-4244-a52b-68fd32bf4191 860946 0 2020-03-18 21:55:38 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 18 21:55:38.497: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7290 /api/v1/namespaces/watch-7290/configmaps/e2e-watch-test-configmap-b 32ca33ff-7e4b-4244-a52b-68fd32bf4191 860946 0 2020-03-18 21:55:38 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 18 21:55:48.503: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7290 /api/v1/namespaces/watch-7290/configmaps/e2e-watch-test-configmap-b 32ca33ff-7e4b-4244-a52b-68fd32bf4191 860976 0 2020-03-18 21:55:38 +0000 UTC map[watch-this-configmap:multiple-watchers-B] 
map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 18 21:55:48.503: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7290 /api/v1/namespaces/watch-7290/configmaps/e2e-watch-test-configmap-b 32ca33ff-7e4b-4244-a52b-68fd32bf4191 860976 0 2020-03-18 21:55:38 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:55:58.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7290" for this suite. • [SLOW TEST:60.092 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":194,"skipped":3029,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:55:58.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ffefeb1b-8dc8-4d9a-83f4-20e8a75b8209 STEP: Creating a pod to test consume secrets Mar 18 21:55:58.583: INFO: Waiting up to 5m0s for pod "pod-secrets-2c85ecf8-db17-41b3-95ee-a088afdccd47" in namespace "secrets-6911" to be "success or failure" Mar 18 21:55:58.586: INFO: Pod "pod-secrets-2c85ecf8-db17-41b3-95ee-a088afdccd47": Phase="Pending", Reason="", readiness=false. Elapsed: 3.172341ms Mar 18 21:56:00.589: INFO: Pod "pod-secrets-2c85ecf8-db17-41b3-95ee-a088afdccd47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006615357s Mar 18 21:56:02.605: INFO: Pod "pod-secrets-2c85ecf8-db17-41b3-95ee-a088afdccd47": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021753488s STEP: Saw pod success Mar 18 21:56:02.605: INFO: Pod "pod-secrets-2c85ecf8-db17-41b3-95ee-a088afdccd47" satisfied condition "success or failure" Mar 18 21:56:02.607: INFO: Trying to get logs from node jerma-worker pod pod-secrets-2c85ecf8-db17-41b3-95ee-a088afdccd47 container secret-volume-test: STEP: delete the pod Mar 18 21:56:02.642: INFO: Waiting for pod pod-secrets-2c85ecf8-db17-41b3-95ee-a088afdccd47 to disappear Mar 18 21:56:02.646: INFO: Pod pod-secrets-2c85ecf8-db17-41b3-95ee-a088afdccd47 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:56:02.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6911" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3052,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:56:02.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-74029ca9-5cc6-41ba-b496-a21c558030c6 STEP: Creating a pod to test consume configMaps Mar 18 21:56:02.743: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-51263f4e-5ee6-431b-820e-53fabf5e1cf5" in namespace "projected-2843" to be "success or failure" Mar 18 21:56:02.772: INFO: Pod "pod-projected-configmaps-51263f4e-5ee6-431b-820e-53fabf5e1cf5": Phase="Pending", Reason="", readiness=false. Elapsed: 29.003422ms Mar 18 21:56:04.828: INFO: Pod "pod-projected-configmaps-51263f4e-5ee6-431b-820e-53fabf5e1cf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084752511s Mar 18 21:56:06.832: INFO: Pod "pod-projected-configmaps-51263f4e-5ee6-431b-820e-53fabf5e1cf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088983009s STEP: Saw pod success Mar 18 21:56:06.832: INFO: Pod "pod-projected-configmaps-51263f4e-5ee6-431b-820e-53fabf5e1cf5" satisfied condition "success or failure" Mar 18 21:56:06.835: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-51263f4e-5ee6-431b-820e-53fabf5e1cf5 container projected-configmap-volume-test: STEP: delete the pod Mar 18 21:56:06.864: INFO: Waiting for pod pod-projected-configmaps-51263f4e-5ee6-431b-820e-53fabf5e1cf5 to disappear Mar 18 21:56:06.906: INFO: Pod pod-projected-configmaps-51263f4e-5ee6-431b-820e-53fabf5e1cf5 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:56:06.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2843" for this suite. 
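The Watchers spec a little above drives three label-selected watches on configmaps and checks which of them observe each ADDED/MODIFIED/DELETED event. One such watch looks like the sketch below in client-go; the label selector value comes from the log, the namespace is illustrative, and the context-taking Watch signature is the one in recent client-go releases.

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Watch only configmaps carrying label A, as the spec's "watch A" does.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue
		}
		// Mirrors the "Got : ADDED/MODIFIED/DELETED" lines in the log.
		fmt.Printf("Got : %s %s\n", ev.Type, cm.Name)
	}
}
```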
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3058,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:56:06.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 21:56:07.487: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 21:56:09.546: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165367, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165367, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165367, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165367, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 21:56:12.599: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:56:12.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:56:13.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1853" for this suite. 
STEP: Destroying namespace "webhook-1853-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.962 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":197,"skipped":3065,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:56:13.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 21:56:13.998: INFO: Waiting up to 5m0s for pod "downwardapi-volume-334b9cf0-b60f-47ba-877d-2cf9f3dbc5bd" in namespace "projected-3955" to be "success or failure" Mar 18 21:56:14.001: INFO: Pod "downwardapi-volume-334b9cf0-b60f-47ba-877d-2cf9f3dbc5bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.794645ms Mar 18 21:56:16.004: INFO: Pod "downwardapi-volume-334b9cf0-b60f-47ba-877d-2cf9f3dbc5bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006675052s Mar 18 21:56:18.007: INFO: Pod "downwardapi-volume-334b9cf0-b60f-47ba-877d-2cf9f3dbc5bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009515044s STEP: Saw pod success Mar 18 21:56:18.007: INFO: Pod "downwardapi-volume-334b9cf0-b60f-47ba-877d-2cf9f3dbc5bd" satisfied condition "success or failure" Mar 18 21:56:18.010: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-334b9cf0-b60f-47ba-877d-2cf9f3dbc5bd container client-container: STEP: delete the pod Mar 18 21:56:18.038: INFO: Waiting for pod downwardapi-volume-334b9cf0-b60f-47ba-877d-2cf9f3dbc5bd to disappear Mar 18 21:56:18.048: INFO: Pod downwardapi-volume-334b9cf0-b60f-47ba-877d-2cf9f3dbc5bd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:56:18.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3955" for this suite. 
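------------------------------
Annotation: the "podname only" downward API volume created above is a single projected file whose content is the pod's own metadata.name. A minimal sketch of just that volume; the volume and file names here are invented, while the fieldRef path is the one the test exercises.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname", // becomes e.g. /etc/podinfo/podname in the container
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
------------------------------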
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3072,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:56:18.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Mar 18 21:56:18.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 18 21:56:18.299: INFO: stderr: "" Mar 18 21:56:18.300: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:56:18.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2930" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":199,"skipped":3078,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:56:18.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-0c72c1d9-3ea8-478c-8582-0e13a524dacc STEP: Creating a pod to test consume configMaps Mar 18 21:56:18.392: INFO: Waiting up to 5m0s for pod "pod-configmaps-3f2574a2-9ba2-4811-bbe0-c83c85b2fa22" in namespace "configmap-1887" to be "success or failure" Mar 18 21:56:18.396: INFO: Pod "pod-configmaps-3f2574a2-9ba2-4811-bbe0-c83c85b2fa22": Phase="Pending", Reason="", readiness=false. Elapsed: 3.891901ms Mar 18 21:56:20.400: INFO: Pod "pod-configmaps-3f2574a2-9ba2-4811-bbe0-c83c85b2fa22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007990321s Mar 18 21:56:22.404: INFO: Pod "pod-configmaps-3f2574a2-9ba2-4811-bbe0-c83c85b2fa22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012514522s STEP: Saw pod success Mar 18 21:56:22.404: INFO: Pod "pod-configmaps-3f2574a2-9ba2-4811-bbe0-c83c85b2fa22" satisfied condition "success or failure" Mar 18 21:56:22.408: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-3f2574a2-9ba2-4811-bbe0-c83c85b2fa22 container configmap-volume-test: STEP: delete the pod Mar 18 21:56:22.443: INFO: Waiting for pod pod-configmaps-3f2574a2-9ba2-4811-bbe0-c83c85b2fa22 to disappear Mar 18 21:56:22.449: INFO: Pod pod-configmaps-3f2574a2-9ba2-4811-bbe0-c83c85b2fa22 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:56:22.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1887" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3093,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:56:22.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:56:36.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9986" for this suite. • [SLOW TEST:14.103 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":201,"skipped":3101,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:56:36.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 21:56:37.424: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 21:56:39.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165397, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165397, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165397, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165397, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 21:56:42.478: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:56:42.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7086" for this suite. STEP: Destroying namespace "webhook-7086-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.096 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":202,"skipped":3106,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:56:42.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-ad312eec-ef87-40e7-a62e-7bd7a2e0d65e STEP: Creating secret with name secret-projected-all-test-volume-6561c57c-d05f-4308-b3ae-848887213b3a STEP: Creating a pod to test Check all projections for projected volume plugin Mar 18 21:56:42.766: INFO: Waiting up to 5m0s for pod "projected-volume-811b3bed-2d81-4bbf-a625-2708a62fd90d" in namespace "projected-4482" to be "success or failure" Mar 18 21:56:42.769: INFO: Pod 
"projected-volume-811b3bed-2d81-4bbf-a625-2708a62fd90d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.148185ms Mar 18 21:56:44.786: INFO: Pod "projected-volume-811b3bed-2d81-4bbf-a625-2708a62fd90d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020847508s Mar 18 21:56:46.790: INFO: Pod "projected-volume-811b3bed-2d81-4bbf-a625-2708a62fd90d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024680715s STEP: Saw pod success Mar 18 21:56:46.790: INFO: Pod "projected-volume-811b3bed-2d81-4bbf-a625-2708a62fd90d" satisfied condition "success or failure" Mar 18 21:56:46.793: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-811b3bed-2d81-4bbf-a625-2708a62fd90d container projected-all-volume-test: STEP: delete the pod Mar 18 21:56:47.148: INFO: Waiting for pod projected-volume-811b3bed-2d81-4bbf-a625-2708a62fd90d to disappear Mar 18 21:56:47.181: INFO: Pod projected-volume-811b3bed-2d81-4bbf-a625-2708a62fd90d no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:56:47.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4482" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3130,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:56:47.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 21:56:47.452: INFO: Creating deployment "test-recreate-deployment" Mar 18 21:56:47.468: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 18 21:56:47.509: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 18 21:56:49.580: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 18 21:56:49.582: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165407, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165407, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165407, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165407, 
loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 21:56:51.613: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 18 21:56:51.619: INFO: Updating deployment test-recreate-deployment Mar 18 21:56:51.619: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 18 21:56:52.290: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9990 /apis/apps/v1/namespaces/deployment-9990/deployments/test-recreate-deployment 62d34524-6568-47a3-adfc-cba1d2de7932 861586 2 2020-03-18 21:56:47 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0065bec78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-18 21:56:51 +0000 UTC,LastTransitionTime:2020-03-18 21:56:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-18 21:56:52 +0000 UTC,LastTransitionTime:2020-03-18 21:56:47 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 18 21:56:52.295: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-9990 /apis/apps/v1/namespaces/deployment-9990/replicasets/test-recreate-deployment-5f94c574ff 5fbdbb03-6c4a-44c6-8056-53b24a028029 861582 1 2020-03-18 21:56:51 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 62d34524-6568-47a3-adfc-cba1d2de7932 0xc0065bf027 0xc0065bf028}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 
pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0065bf088 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 18 21:56:52.295: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 18 21:56:52.295: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-9990 /apis/apps/v1/namespaces/deployment-9990/replicasets/test-recreate-deployment-799c574856 661566bc-5222-4edb-b5b4-c37619eddfce 861573 2 2020-03-18 21:56:47 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 62d34524-6568-47a3-adfc-cba1d2de7932 0xc0065bf0f7 0xc0065bf0f8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0065bf168 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 18 21:56:52.298: INFO: Pod "test-recreate-deployment-5f94c574ff-6kjk7" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-6kjk7 test-recreate-deployment-5f94c574ff- deployment-9990 /api/v1/namespaces/deployment-9990/pods/test-recreate-deployment-5f94c574ff-6kjk7 0d876647-b992-4acd-b79a-8190ce4d2f79 861588 0 2020-03-18 21:56:51 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 5fbdbb03-6c4a-44c6-8056-53b24a028029 0xc0065bf5a7 0xc0065bf5a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tkl7l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tkl7l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tkl7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:56:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:56:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:56:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 21:56:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-18 21:56:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:56:52.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9990" for this suite. • [SLOW TEST:5.119 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":204,"skipped":3145,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:56:52.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Mar 18 21:56:52.526: INFO: Waiting up to 5m0s for pod "var-expansion-7c3d5b8d-fd57-433d-8140-0e0fa0c0608c" in namespace "var-expansion-3466" to be "success or failure" Mar 18 21:56:52.698: INFO: Pod "var-expansion-7c3d5b8d-fd57-433d-8140-0e0fa0c0608c": Phase="Pending", Reason="", readiness=false. Elapsed: 171.532277ms Mar 18 21:56:54.701: INFO: Pod "var-expansion-7c3d5b8d-fd57-433d-8140-0e0fa0c0608c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174851216s Mar 18 21:56:56.706: INFO: Pod "var-expansion-7c3d5b8d-fd57-433d-8140-0e0fa0c0608c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.179001903s STEP: Saw pod success Mar 18 21:56:56.706: INFO: Pod "var-expansion-7c3d5b8d-fd57-433d-8140-0e0fa0c0608c" satisfied condition "success or failure" Mar 18 21:56:56.709: INFO: Trying to get logs from node jerma-worker pod var-expansion-7c3d5b8d-fd57-433d-8140-0e0fa0c0608c container dapi-container: STEP: delete the pod Mar 18 21:56:56.728: INFO: Waiting for pod var-expansion-7c3d5b8d-fd57-433d-8140-0e0fa0c0608c to disappear Mar 18 21:56:56.731: INFO: Pod var-expansion-7c3d5b8d-fd57-433d-8140-0e0fa0c0608c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:56:56.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3466" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3167,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:56:56.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-c78b6e10-f290-4e77-9f79-636512c19a27 STEP: Creating configMap with name cm-test-opt-upd-6218cf88-3cb8-47d6-97d2-406dc8963ae5 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-c78b6e10-f290-4e77-9f79-636512c19a27 STEP: Updating configmap cm-test-opt-upd-6218cf88-3cb8-47d6-97d2-406dc8963ae5 STEP: Creating configMap with name cm-test-opt-create-7bdb9852-c34c-4462-9cf4-24b2f24cf55b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:58:13.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2056" for this suite. 
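------------------------------
Annotation: the "optional updates" spec above (the cm-test-opt-del / opt-upd / opt-create steps) relies on configMap volumes marked Optional: the pod starts even if the configMap is absent, and the kubelet later syncs deletions, updates, and late creations into the mounted directory, which is why the test spends over a minute "waiting to observe update in volume". A minimal sketch of such a volume; the name is invented.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "cm-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt"},
				Optional:             &optional, // missing configMap => empty dir, not a pod startup failure
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
------------------------------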
• [SLOW TEST:76.494 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3181,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:58:13.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:58:18.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3810" for this suite. 
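------------------------------
Annotation: the ordering guarantee the watch spec above verifies is that watches opened at the same resourceVersion deliver events in the same order. A sketch of opening one such watch, assuming the v1.17-era client-go of this run (Watch takes no context there); "12345" stands in for a real resourceVersion taken from a prior list or event.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	w, err := cs.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{ResourceVersion: "12345"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// Events replay in resourceVersion order; a second watch started at
		// the same "12345" would observe the identical sequence.
		fmt.Println(ev.Type)
	}
}
------------------------------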
• [SLOW TEST:5.356 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":207,"skipped":3184,"failed":0} SSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:58:18.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-7477 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7477 to expose endpoints map[] Mar 18 21:58:18.949: INFO: Get endpoints failed (12.330865ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 18 21:58:19.953: INFO: successfully validated that service endpoint-test2 in namespace services-7477 exposes endpoints map[] (1.016389896s elapsed) STEP: Creating pod pod1 in namespace services-7477 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7477 to expose endpoints map[pod1:[80]] Mar 18 21:58:23.011: INFO: successfully validated that service endpoint-test2 in namespace services-7477 exposes endpoints map[pod1:[80]] (3.051317162s elapsed) STEP: Creating pod pod2 in namespace services-7477 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7477 to expose endpoints map[pod1:[80] pod2:[80]] Mar 18 21:58:26.148: INFO: successfully validated that service endpoint-test2 in namespace services-7477 exposes endpoints map[pod1:[80] pod2:[80]] (3.132737014s elapsed) STEP: Deleting pod pod1 in namespace services-7477 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7477 to expose endpoints map[pod2:[80]] Mar 18 21:58:27.198: INFO: successfully validated that service endpoint-test2 in namespace services-7477 exposes endpoints map[pod2:[80]] (1.045693761s elapsed) STEP: Deleting pod pod2 in namespace services-7477 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7477 to expose endpoints map[] Mar 18 21:58:27.226: INFO: successfully validated that service endpoint-test2 in namespace services-7477 exposes endpoints map[] (17.92714ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:58:27.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7477" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:8.677 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":208,"skipped":3187,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:58:27.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 21:58:27.378: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36d77ead-8f05-4c85-bf8c-4a5a0ca83c2f" in namespace "projected-2970" to be "success or failure" Mar 18 21:58:27.465: INFO: Pod "downwardapi-volume-36d77ead-8f05-4c85-bf8c-4a5a0ca83c2f": Phase="Pending", Reason="", readiness=false. Elapsed: 87.280836ms Mar 18 21:58:29.512: INFO: Pod "downwardapi-volume-36d77ead-8f05-4c85-bf8c-4a5a0ca83c2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134402066s Mar 18 21:58:31.523: INFO: Pod "downwardapi-volume-36d77ead-8f05-4c85-bf8c-4a5a0ca83c2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.145368292s STEP: Saw pod success Mar 18 21:58:31.523: INFO: Pod "downwardapi-volume-36d77ead-8f05-4c85-bf8c-4a5a0ca83c2f" satisfied condition "success or failure" Mar 18 21:58:31.526: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-36d77ead-8f05-4c85-bf8c-4a5a0ca83c2f container client-container: STEP: delete the pod Mar 18 21:58:31.576: INFO: Waiting for pod downwardapi-volume-36d77ead-8f05-4c85-bf8c-4a5a0ca83c2f to disappear Mar 18 21:58:31.586: INFO: Pod downwardapi-volume-36d77ead-8f05-4c85-bf8c-4a5a0ca83c2f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:58:31.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2970" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3194,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:58:31.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-780da4e2-d8ba-4604-96eb-2aaa4437de61 STEP: Creating configMap with name cm-test-opt-upd-ebffea94-cbe1-4e17-900d-200af31b31f4 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-780da4e2-d8ba-4604-96eb-2aaa4437de61 STEP: Updating configmap cm-test-opt-upd-ebffea94-cbe1-4e17-900d-200af31b31f4 STEP: Creating configMap with name cm-test-opt-create-073b6b44-8bbc-4cf8-b85e-f45d993b7c93 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:59:50.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1035" for this suite. 
• [SLOW TEST:78.530 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3233,"failed":0} SSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:59:50.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:59:50.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-855" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":211,"skipped":3237,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:59:50.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-bb7aba4c-bdf8-4609-af84-b8876e5bbfcc [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:59:50.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8445" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":212,"skipped":3258,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:59:50.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 18 21:59:50.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2252' Mar 18 21:59:50.622: INFO: stderr: "" Mar 18 21:59:50.622: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 18 21:59:51.652: INFO: Selector matched 1 pods for map[app:agnhost] Mar 18 21:59:51.652: INFO: Found 0 / 1 Mar 18 21:59:52.626: INFO: Selector matched 1 pods for map[app:agnhost] Mar 18 21:59:52.626: INFO: Found 0 / 1 Mar 18 21:59:53.626: INFO: Selector matched 1 pods for map[app:agnhost] Mar 18 21:59:53.626: INFO: Found 1 / 1 Mar 18 21:59:53.626: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 18 21:59:53.630: INFO: Selector matched 1 pods for map[app:agnhost] Mar 18 21:59:53.630: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 18 21:59:53.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-knwxd --namespace=kubectl-2252 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 18 21:59:53.723: INFO: stderr: "" Mar 18 21:59:53.723: INFO: stdout: "pod/agnhost-master-knwxd patched\n" STEP: checking annotations Mar 18 21:59:53.743: INFO: Selector matched 1 pods for map[app:agnhost] Mar 18 21:59:53.743: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:59:53.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2252" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":213,"skipped":3268,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:59:53.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 18 21:59:53.805: INFO: Waiting up to 5m0s for pod "downward-api-fed8a0d1-3815-4f3a-a2c7-cdb34ed48211" in namespace "downward-api-3530" to be "success or failure" Mar 18 21:59:53.815: INFO: Pod "downward-api-fed8a0d1-3815-4f3a-a2c7-cdb34ed48211": Phase="Pending", Reason="", readiness=false. Elapsed: 9.643924ms Mar 18 21:59:56.000: INFO: Pod "downward-api-fed8a0d1-3815-4f3a-a2c7-cdb34ed48211": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194621301s Mar 18 21:59:58.004: INFO: Pod "downward-api-fed8a0d1-3815-4f3a-a2c7-cdb34ed48211": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.199114212s STEP: Saw pod success Mar 18 21:59:58.005: INFO: Pod "downward-api-fed8a0d1-3815-4f3a-a2c7-cdb34ed48211" satisfied condition "success or failure" Mar 18 21:59:58.008: INFO: Trying to get logs from node jerma-worker pod downward-api-fed8a0d1-3815-4f3a-a2c7-cdb34ed48211 container dapi-container: STEP: delete the pod Mar 18 21:59:58.051: INFO: Waiting for pod downward-api-fed8a0d1-3815-4f3a-a2c7-cdb34ed48211 to disappear Mar 18 21:59:58.082: INFO: Pod downward-api-fed8a0d1-3815-4f3a-a2c7-cdb34ed48211 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 21:59:58.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3530" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3274,"failed":0} SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 21:59:58.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-7311 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-7311 STEP: Deleting pre-stop pod Mar 18 22:00:11.199: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:00:11.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7311" for this suite. 
• [SLOW TEST:13.132 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":215,"skipped":3276,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:00:11.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:00:11.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1325" for this suite. 
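------------------------------
Annotation: the discovery walk above, condensed: fetch the resource list for apiextensions.k8s.io/v1 and confirm customresourcedefinitions is in it. A sketch against the v1.17-era client-go used by this run.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	rl, err := cs.Discovery().ServerResourcesForGroupVersion("apiextensions.k8s.io/v1")
	if err != nil {
		panic(err)
	}
	for _, r := range rl.APIResources {
		if r.Name == "customresourcedefinitions" {
			fmt.Println("found CRDs resource, kind:", r.Kind) // expect: CustomResourceDefinition
		}
	}
}
------------------------------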
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":216,"skipped":3281,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:00:11.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-9231/secret-test-81ef7fa1-f76b-4915-9717-3c8468421531 STEP: Creating a pod to test consume secrets Mar 18 22:00:11.540: INFO: Waiting up to 5m0s for pod "pod-configmaps-29cd94b0-7632-42d9-83b5-ce209f332ad2" in namespace "secrets-9231" to be "success or failure" Mar 18 22:00:11.599: INFO: Pod "pod-configmaps-29cd94b0-7632-42d9-83b5-ce209f332ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 58.339864ms Mar 18 22:00:13.603: INFO: Pod "pod-configmaps-29cd94b0-7632-42d9-83b5-ce209f332ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062377505s Mar 18 22:00:15.607: INFO: Pod "pod-configmaps-29cd94b0-7632-42d9-83b5-ce209f332ad2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066668223s STEP: Saw pod success Mar 18 22:00:15.607: INFO: Pod "pod-configmaps-29cd94b0-7632-42d9-83b5-ce209f332ad2" satisfied condition "success or failure" Mar 18 22:00:15.610: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-29cd94b0-7632-42d9-83b5-ce209f332ad2 container env-test: STEP: delete the pod Mar 18 22:00:15.649: INFO: Waiting for pod pod-configmaps-29cd94b0-7632-42d9-83b5-ce209f332ad2 to disappear Mar 18 22:00:15.682: INFO: Pod pod-configmaps-29cd94b0-7632-42d9-83b5-ce209f332ad2 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:00:15.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9231" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3330,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:00:15.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 22:00:16.564: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 22:00:18.572: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165616, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165616, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165616, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165616, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 22:00:21.647: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 22:00:21.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9922-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:00:22.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1043" for this suite. STEP: Destroying namespace "webhook-1043-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.730 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":218,"skipped":3346,"failed":0} S ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:00:22.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-6959/configmap-test-1ea4ba3b-142d-48e2-9d27-92c235c9f6d5 STEP: Creating a pod to test consume configMaps Mar 18 22:00:22.541: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a6cd0c9-bf6b-4c19-b819-1696e5c57430" in namespace "configmap-6959" to be "success or failure" Mar 18 22:00:22.564: INFO: Pod "pod-configmaps-1a6cd0c9-bf6b-4c19-b819-1696e5c57430": Phase="Pending", Reason="", readiness=false. Elapsed: 22.96099ms Mar 18 22:00:24.568: INFO: Pod "pod-configmaps-1a6cd0c9-bf6b-4c19-b819-1696e5c57430": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027348522s Mar 18 22:00:26.572: INFO: Pod "pod-configmaps-1a6cd0c9-bf6b-4c19-b819-1696e5c57430": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031538571s STEP: Saw pod success Mar 18 22:00:26.572: INFO: Pod "pod-configmaps-1a6cd0c9-bf6b-4c19-b819-1696e5c57430" satisfied condition "success or failure" Mar 18 22:00:26.575: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-1a6cd0c9-bf6b-4c19-b819-1696e5c57430 container env-test: STEP: delete the pod Mar 18 22:00:26.595: INFO: Waiting for pod pod-configmaps-1a6cd0c9-bf6b-4c19-b819-1696e5c57430 to disappear Mar 18 22:00:26.599: INFO: Pod pod-configmaps-1a6cd0c9-bf6b-4c19-b819-1696e5c57430 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:00:26.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6959" for this suite. 
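This test wires a single ConfigMap key into the environment via env/valueFrom; envFrom is the bulk variant of the same mechanism. A minimal sketch (names and image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox             # illustrative image
    command: ["sh", "-c", "env"]
    envFrom:
    - configMapRef:
        name: my-config        # assumed ConfigMap; every key becomes an env var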
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3347,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:00:26.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-24c66f49-6e47-4fd9-8c61-a7b5bf1b27a7 in namespace container-probe-4859 Mar 18 22:00:30.696: INFO: Started pod liveness-24c66f49-6e47-4fd9-8c61-a7b5bf1b27a7 in namespace container-probe-4859 STEP: checking the pod's current state and verifying that restartCount is present Mar 18 22:00:30.700: INFO: Initial restart count of pod liveness-24c66f49-6e47-4fd9-8c61-a7b5bf1b27a7 is 0 Mar 18 22:00:52.744: INFO: Restart count of pod container-probe-4859/liveness-24c66f49-6e47-4fd9-8c61-a7b5bf1b27a7 is now 1 (22.044678909s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:00:52.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4859" for this suite. 
• [SLOW TEST:26.211 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3359,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:00:52.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 18 22:00:52.863: INFO: >>> kubeConfig: /root/.kube/config Mar 18 22:00:54.966: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:01:05.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7654" for this suite. 
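Publishing works because each CRD carries an openAPIV3Schema that the apiserver aggregates into the cluster's OpenAPI document, so CRs in different groups get independent entries. A minimal sketch of one such CRD (group, names, and schema are illustrative assumptions):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.groupa.example.com      # must be <plural>.<group>
spec:
  group: groupa.example.com          # hypothetical group
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object

After creation, kubectl explain foos is answered from the published schema, which is effectively what the test verifies for two CRDs at once.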
• [SLOW TEST:12.589 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":221,"skipped":3367,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:01:05.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-9dtr STEP: Creating a pod to test atomic-volume-subpath Mar 18 22:01:05.470: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9dtr" in namespace "subpath-2738" to be "success or failure" Mar 18 22:01:05.474: INFO: Pod "pod-subpath-test-configmap-9dtr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10805ms Mar 18 22:01:07.605: INFO: Pod "pod-subpath-test-configmap-9dtr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134907189s Mar 18 22:01:09.609: INFO: Pod "pod-subpath-test-configmap-9dtr": Phase="Running", Reason="", readiness=true. Elapsed: 4.139118578s Mar 18 22:01:11.613: INFO: Pod "pod-subpath-test-configmap-9dtr": Phase="Running", Reason="", readiness=true. Elapsed: 6.143007276s Mar 18 22:01:13.617: INFO: Pod "pod-subpath-test-configmap-9dtr": Phase="Running", Reason="", readiness=true. Elapsed: 8.146913969s Mar 18 22:01:15.621: INFO: Pod "pod-subpath-test-configmap-9dtr": Phase="Running", Reason="", readiness=true. Elapsed: 10.150579141s Mar 18 22:01:17.624: INFO: Pod "pod-subpath-test-configmap-9dtr": Phase="Running", Reason="", readiness=true. Elapsed: 12.15438171s Mar 18 22:01:19.628: INFO: Pod "pod-subpath-test-configmap-9dtr": Phase="Running", Reason="", readiness=true. Elapsed: 14.15834957s Mar 18 22:01:21.632: INFO: Pod "pod-subpath-test-configmap-9dtr": Phase="Running", Reason="", readiness=true. Elapsed: 16.162082639s Mar 18 22:01:23.643: INFO: Pod "pod-subpath-test-configmap-9dtr": Phase="Running", Reason="", readiness=true. Elapsed: 18.172655179s Mar 18 22:01:25.647: INFO: Pod "pod-subpath-test-configmap-9dtr": Phase="Running", Reason="", readiness=true. Elapsed: 20.176862556s Mar 18 22:01:27.651: INFO: Pod "pod-subpath-test-configmap-9dtr": Phase="Running", Reason="", readiness=true. Elapsed: 22.18099363s Mar 18 22:01:29.655: INFO: Pod "pod-subpath-test-configmap-9dtr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.184945117s STEP: Saw pod success Mar 18 22:01:29.655: INFO: Pod "pod-subpath-test-configmap-9dtr" satisfied condition "success or failure" Mar 18 22:01:29.658: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-9dtr container test-container-subpath-configmap-9dtr: STEP: delete the pod Mar 18 22:01:29.731: INFO: Waiting for pod pod-subpath-test-configmap-9dtr to disappear Mar 18 22:01:29.739: INFO: Pod pod-subpath-test-configmap-9dtr no longer exists STEP: Deleting pod pod-subpath-test-configmap-9dtr Mar 18 22:01:29.739: INFO: Deleting pod "pod-subpath-test-configmap-9dtr" in namespace "subpath-2738" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:01:29.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2738" for this suite. • [SLOW TEST:24.383 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":222,"skipped":3387,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:01:29.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 18 22:01:34.423: INFO: Successfully updated pod "labelsupdate720b04df-5baf-4d50-a227-ec20ae2f622d" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:01:36.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6521" for this suite. 
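The label update propagating into the container is the projected downwardAPI volume at work: the kubelet rewrites the mounted file when pod metadata changes, with no container restart. A sketch of the relevant wiring (pod name, image, and paths are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: labels-demo            # hypothetical name
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: busybox             # illustrative image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels

Relabeling the pod (e.g. kubectl label pod labels-demo key=value2 --overwrite) eventually changes the contents of /etc/podinfo/labels, which is the update the test waits for.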
• [SLOW TEST:6.656 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3439,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:01:36.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:01:40.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4411" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3450,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:01:40.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 18 22:01:40.618: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 18 22:01:40.657: INFO: Waiting for terminating namespaces to be deleted... 
Mar 18 22:01:40.660: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 18 22:01:40.665: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 18 22:01:40.665: INFO: Container kindnet-cni ready: true, restart count 0 Mar 18 22:01:40.665: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 18 22:01:40.665: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 22:01:40.665: INFO: busybox-scheduling-c652191c-54c3-4aa2-878b-6be67d0a8ca6 from kubelet-test-4411 started at 2020-03-18 22:01:36 +0000 UTC (1 container statuses recorded) Mar 18 22:01:40.665: INFO: Container busybox-scheduling-c652191c-54c3-4aa2-878b-6be67d0a8ca6 ready: true, restart count 0 Mar 18 22:01:40.665: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 18 22:01:40.669: INFO: labelsupdate720b04df-5baf-4d50-a227-ec20ae2f622d from projected-6521 started at 2020-03-18 22:01:29 +0000 UTC (1 container statuses recorded) Mar 18 22:01:40.669: INFO: Container client-container ready: true, restart count 0 Mar 18 22:01:40.669: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 18 22:01:40.669: INFO: Container kindnet-cni ready: true, restart count 0 Mar 18 22:01:40.669: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 18 22:01:40.669: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-fbd9b6e3-9bc1-453b-8b5f-d7b0968a546c 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-fbd9b6e3-9bc1-453b-8b5f-d7b0968a546c off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-fbd9b6e3-9bc1-453b-8b5f-d7b0968a546c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:01:56.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9070" for this suite.
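The scheduling outcome above hinges on the hostPort conflict check comparing the full (hostIP, hostPort, protocol) tuple: pod2 differs in hostIP and pod3 in protocol, so all three fit on the same node. A sketch of pod1's relevant stanza (name is illustrative; the test pins pods to the node with a random label rather than the nodeSelector shown here):

apiVersion: v1
kind: Pod
metadata:
  name: hostport-pod1          # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/hostname: jerma-worker2   # simplified stand-in for the test's random label
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # image family seen elsewhere in this run
    ports:
    - containerPort: 54321
      hostPort: 54321
      hostIP: 127.0.0.1        # pod2 would use 127.0.0.2 here
      protocol: TCP            # pod3 would use UDP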
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.307 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":225,"skipped":3450,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:01:56.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 18 22:01:56.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-61' Mar 18 22:01:57.256: INFO: stderr: "" Mar 18 22:01:57.256: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 18 22:01:57.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-61' Mar 18 22:01:57.358: INFO: stderr: "" Mar 18 22:01:57.358: INFO: stdout: "update-demo-nautilus-l8pmx update-demo-nautilus-xkmgx " Mar 18 22:01:57.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8pmx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-61' Mar 18 22:01:57.448: INFO: stderr: "" Mar 18 22:01:57.448: INFO: stdout: "" Mar 18 22:01:57.448: INFO: update-demo-nautilus-l8pmx is created but not running Mar 18 22:02:02.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-61' Mar 18 22:02:02.534: INFO: stderr: "" Mar 18 22:02:02.534: INFO: stdout: "update-demo-nautilus-l8pmx update-demo-nautilus-xkmgx " Mar 18 22:02:02.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8pmx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-61' Mar 18 22:02:02.617: INFO: stderr: "" Mar 18 22:02:02.617: INFO: stdout: "true" Mar 18 22:02:02.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8pmx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-61' Mar 18 22:02:02.705: INFO: stderr: "" Mar 18 22:02:02.705: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 22:02:02.705: INFO: validating pod update-demo-nautilus-l8pmx Mar 18 22:02:02.708: INFO: got data: { "image": "nautilus.jpg" } Mar 18 22:02:02.708: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 22:02:02.708: INFO: update-demo-nautilus-l8pmx is verified up and running Mar 18 22:02:02.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xkmgx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-61' Mar 18 22:02:02.798: INFO: stderr: "" Mar 18 22:02:02.798: INFO: stdout: "true" Mar 18 22:02:02.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xkmgx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-61' Mar 18 22:02:02.886: INFO: stderr: "" Mar 18 22:02:02.886: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 22:02:02.886: INFO: validating pod update-demo-nautilus-xkmgx Mar 18 22:02:02.890: INFO: got data: { "image": "nautilus.jpg" } Mar 18 22:02:02.890: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 22:02:02.890: INFO: update-demo-nautilus-xkmgx is verified up and running STEP: using delete to clean up resources Mar 18 22:02:02.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-61' Mar 18 22:02:02.992: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 18 22:02:02.993: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 18 22:02:02.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-61' Mar 18 22:02:03.090: INFO: stderr: "No resources found in kubectl-61 namespace.\n" Mar 18 22:02:03.091: INFO: stdout: "" Mar 18 22:02:03.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-61 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 18 22:02:03.177: INFO: stderr: "" Mar 18 22:02:03.177: INFO: stdout: "update-demo-nautilus-l8pmx\nupdate-demo-nautilus-xkmgx\n" Mar 18 22:02:03.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-61' Mar 18 22:02:03.775: INFO: stderr: "No resources found in kubectl-61 namespace.\n" Mar 18 22:02:03.775: INFO: stdout: "" Mar 18 22:02:03.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-61 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 18 22:02:03.883: INFO: stderr: "" Mar 18 22:02:03.883: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:02:03.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-61" for this suite. • [SLOW TEST:7.002 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":226,"skipped":3482,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:02:03.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2971 [It] should perform rolling updates and roll backs of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 18 22:02:04.053: INFO: Found 0 stateful pods, waiting for 3 Mar 18 22:02:14.058: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 18 22:02:14.058: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 18 22:02:14.058: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 18 22:02:24.058: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 18 22:02:24.058: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 18 22:02:24.058: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 18 22:02:24.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2971 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 18 22:02:24.340: INFO: stderr: "I0318 22:02:24.202604 3281 log.go:172] (0xc0009ced10) (0xc000a38280) Create stream\nI0318 22:02:24.202672 3281 log.go:172] (0xc0009ced10) (0xc000a38280) Stream added, broadcasting: 1\nI0318 22:02:24.207875 3281 log.go:172] (0xc0009ced10) Reply frame received for 1\nI0318 22:02:24.207937 3281 log.go:172] (0xc0009ced10) (0xc000627540) Create stream\nI0318 22:02:24.207963 3281 log.go:172] (0xc0009ced10) (0xc000627540) Stream added, broadcasting: 3\nI0318 22:02:24.209348 3281 log.go:172] (0xc0009ced10) Reply frame received for 3\nI0318 22:02:24.209388 3281 log.go:172] (0xc0009ced10) (0xc000755a40) Create stream\nI0318 22:02:24.209407 3281 log.go:172] (0xc0009ced10) (0xc000755a40) Stream added, broadcasting: 5\nI0318 22:02:24.210443 3281 log.go:172] (0xc0009ced10) Reply frame received for 5\nI0318 22:02:24.292048 3281 log.go:172] (0xc0009ced10) Data frame received for 5\nI0318 22:02:24.292077 3281 log.go:172] (0xc000755a40) (5) Data frame handling\nI0318 22:02:24.292099 3281 log.go:172] (0xc000755a40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0318 22:02:24.333082 3281 log.go:172] (0xc0009ced10) Data frame received for 3\nI0318 22:02:24.333293 3281 log.go:172] (0xc000627540) (3) Data frame handling\nI0318 22:02:24.333352 3281 log.go:172] (0xc000627540) (3) Data frame sent\nI0318 22:02:24.333534 3281 log.go:172] (0xc0009ced10) Data frame received for 3\nI0318 22:02:24.333569 3281 log.go:172] (0xc000627540) (3) Data frame handling\nI0318 22:02:24.333602 3281 log.go:172] (0xc0009ced10) Data frame received for 5\nI0318 22:02:24.333636 3281 log.go:172] (0xc000755a40) (5) Data frame handling\nI0318 22:02:24.335604 3281 log.go:172] (0xc0009ced10) Data frame received for 1\nI0318 22:02:24.335637 3281 log.go:172] (0xc000a38280) (1) Data frame handling\nI0318 22:02:24.335658 3281 log.go:172] (0xc000a38280) (1) Data frame sent\nI0318 22:02:24.335680 3281 log.go:172] (0xc0009ced10) (0xc000a38280) Stream removed, broadcasting: 1\nI0318 22:02:24.335697 3281 log.go:172] (0xc0009ced10) Go away received\nI0318 22:02:24.336162 3281 log.go:172] (0xc0009ced10) (0xc000a38280) Stream removed, broadcasting: 1\nI0318 22:02:24.336196 3281 log.go:172] (0xc0009ced10) (0xc000627540) Stream removed, broadcasting: 3\nI0318 22:02:24.336209 3281 log.go:172] (0xc0009ced10) (0xc000755a40) Stream removed, broadcasting: 5\n" Mar 18 22:02:24.340: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html'\n" Mar 18 22:02:24.340: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 18 22:02:34.370: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 18 22:02:44.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2971 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 18 22:02:44.634: INFO: stderr: "I0318 22:02:44.533604 3303 log.go:172] (0xc000980000) (0xc0006b86e0) Create stream\nI0318 22:02:44.533668 3303 log.go:172] (0xc000980000) (0xc0006b86e0) Stream added, broadcasting: 1\nI0318 22:02:44.535303 3303 log.go:172] (0xc000980000) Reply frame received for 1\nI0318 22:02:44.535333 3303 log.go:172] (0xc000980000) (0xc00071dc20) Create stream\nI0318 22:02:44.535341 3303 log.go:172] (0xc000980000) (0xc00071dc20) Stream added, broadcasting: 3\nI0318 22:02:44.536288 3303 log.go:172] (0xc000980000) Reply frame received for 3\nI0318 22:02:44.536345 3303 log.go:172] (0xc000980000) (0xc00071dcc0) Create stream\nI0318 22:02:44.536361 3303 log.go:172] (0xc000980000) (0xc00071dcc0) Stream added, broadcasting: 5\nI0318 22:02:44.537353 3303 log.go:172] (0xc000980000) Reply frame received for 5\nI0318 22:02:44.627987 3303 log.go:172] (0xc000980000) Data frame received for 5\nI0318 22:02:44.628044 3303 log.go:172] (0xc00071dcc0) (5) Data frame handling\nI0318 22:02:44.628068 3303 log.go:172] (0xc00071dcc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0318 22:02:44.628099 3303 log.go:172] (0xc000980000) Data frame received for 3\nI0318 22:02:44.628117 3303 log.go:172] (0xc00071dc20) (3) Data frame handling\nI0318 22:02:44.628160 3303 log.go:172] (0xc00071dc20) (3) Data frame sent\nI0318 22:02:44.628181 3303 log.go:172] (0xc000980000) Data frame received for 3\nI0318 22:02:44.628198 3303 log.go:172] (0xc00071dc20) (3) Data frame handling\nI0318 22:02:44.628416 3303 log.go:172] (0xc000980000) Data frame received for 5\nI0318 22:02:44.628446 3303 log.go:172] (0xc00071dcc0) (5) Data frame handling\nI0318 22:02:44.630020 3303 log.go:172] (0xc000980000) Data frame received for 1\nI0318 22:02:44.630047 3303 log.go:172] (0xc0006b86e0) (1) Data frame handling\nI0318 22:02:44.630085 3303 log.go:172] (0xc0006b86e0) (1) Data frame sent\nI0318 22:02:44.630105 3303 log.go:172] (0xc000980000) (0xc0006b86e0) Stream removed, broadcasting: 1\nI0318 22:02:44.630317 3303 log.go:172] (0xc000980000) Go away received\nI0318 22:02:44.630506 3303 log.go:172] (0xc000980000) (0xc0006b86e0) Stream removed, broadcasting: 1\nI0318 22:02:44.630527 3303 log.go:172] (0xc000980000) (0xc00071dc20) Stream removed, broadcasting: 3\nI0318 22:02:44.630539 3303 log.go:172] (0xc000980000) (0xc00071dcc0) Stream removed, broadcasting: 5\n" Mar 18 22:02:44.634: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 18 22:02:44.634: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 18 22:03:04.880: INFO: Waiting for StatefulSet statefulset-2971/ss2 to complete update Mar 18 22:03:04.880: INFO: Waiting for Pod statefulset-2971/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a 
previous revision Mar 18 22:03:14.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2971 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 18 22:03:17.481: INFO: stderr: "I0318 22:03:17.356697 3323 log.go:172] (0xc000b34000) (0xc0006ff900) Create stream\nI0318 22:03:17.356726 3323 log.go:172] (0xc000b34000) (0xc0006ff900) Stream added, broadcasting: 1\nI0318 22:03:17.359106 3323 log.go:172] (0xc000b34000) Reply frame received for 1\nI0318 22:03:17.359148 3323 log.go:172] (0xc000b34000) (0xc0004dc460) Create stream\nI0318 22:03:17.359160 3323 log.go:172] (0xc000b34000) (0xc0004dc460) Stream added, broadcasting: 3\nI0318 22:03:17.360038 3323 log.go:172] (0xc000b34000) Reply frame received for 3\nI0318 22:03:17.360070 3323 log.go:172] (0xc000b34000) (0xc0006fa1e0) Create stream\nI0318 22:03:17.360084 3323 log.go:172] (0xc000b34000) (0xc0006fa1e0) Stream added, broadcasting: 5\nI0318 22:03:17.361008 3323 log.go:172] (0xc000b34000) Reply frame received for 5\nI0318 22:03:17.420802 3323 log.go:172] (0xc000b34000) Data frame received for 5\nI0318 22:03:17.420831 3323 log.go:172] (0xc0006fa1e0) (5) Data frame handling\nI0318 22:03:17.420850 3323 log.go:172] (0xc0006fa1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0318 22:03:17.474485 3323 log.go:172] (0xc000b34000) Data frame received for 3\nI0318 22:03:17.474518 3323 log.go:172] (0xc0004dc460) (3) Data frame handling\nI0318 22:03:17.474543 3323 log.go:172] (0xc0004dc460) (3) Data frame sent\nI0318 22:03:17.474560 3323 log.go:172] (0xc000b34000) Data frame received for 3\nI0318 22:03:17.474578 3323 log.go:172] (0xc0004dc460) (3) Data frame handling\nI0318 22:03:17.474613 3323 log.go:172] (0xc000b34000) Data frame received for 5\nI0318 22:03:17.474627 3323 log.go:172] (0xc0006fa1e0) (5) Data frame handling\nI0318 22:03:17.476340 3323 log.go:172] (0xc000b34000) Data frame received for 1\nI0318 22:03:17.476367 3323 log.go:172] (0xc0006ff900) (1) Data frame handling\nI0318 22:03:17.476388 3323 log.go:172] (0xc0006ff900) (1) Data frame sent\nI0318 22:03:17.476403 3323 log.go:172] (0xc000b34000) (0xc0006ff900) Stream removed, broadcasting: 1\nI0318 22:03:17.476422 3323 log.go:172] (0xc000b34000) Go away received\nI0318 22:03:17.476821 3323 log.go:172] (0xc000b34000) (0xc0006ff900) Stream removed, broadcasting: 1\nI0318 22:03:17.476840 3323 log.go:172] (0xc000b34000) (0xc0004dc460) Stream removed, broadcasting: 3\nI0318 22:03:17.476849 3323 log.go:172] (0xc000b34000) (0xc0006fa1e0) Stream removed, broadcasting: 5\n" Mar 18 22:03:17.481: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 18 22:03:17.481: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 18 22:03:27.514: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 18 22:03:37.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2971 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 18 22:03:37.749: INFO: stderr: "I0318 22:03:37.674514 3359 log.go:172] (0xc00058ea50) (0xc000544000) Create stream\nI0318 22:03:37.674590 3359 log.go:172] (0xc00058ea50) (0xc000544000) Stream added, broadcasting: 1\nI0318 22:03:37.677423 3359 log.go:172] (0xc00058ea50) Reply frame received for 1\nI0318 22:03:37.677472 3359 log.go:172] (0xc00058ea50) 
(0xc0009f2000) Create stream\nI0318 22:03:37.677488 3359 log.go:172] (0xc00058ea50) (0xc0009f2000) Stream added, broadcasting: 3\nI0318 22:03:37.678357 3359 log.go:172] (0xc00058ea50) Reply frame received for 3\nI0318 22:03:37.678381 3359 log.go:172] (0xc00058ea50) (0xc0005f5ae0) Create stream\nI0318 22:03:37.678394 3359 log.go:172] (0xc00058ea50) (0xc0005f5ae0) Stream added, broadcasting: 5\nI0318 22:03:37.679175 3359 log.go:172] (0xc00058ea50) Reply frame received for 5\nI0318 22:03:37.739876 3359 log.go:172] (0xc00058ea50) Data frame received for 3\nI0318 22:03:37.739919 3359 log.go:172] (0xc0009f2000) (3) Data frame handling\nI0318 22:03:37.739935 3359 log.go:172] (0xc0009f2000) (3) Data frame sent\nI0318 22:03:37.739947 3359 log.go:172] (0xc00058ea50) Data frame received for 3\nI0318 22:03:37.739958 3359 log.go:172] (0xc0009f2000) (3) Data frame handling\nI0318 22:03:37.739973 3359 log.go:172] (0xc00058ea50) Data frame received for 5\nI0318 22:03:37.739983 3359 log.go:172] (0xc0005f5ae0) (5) Data frame handling\nI0318 22:03:37.739998 3359 log.go:172] (0xc0005f5ae0) (5) Data frame sent\nI0318 22:03:37.740010 3359 log.go:172] (0xc00058ea50) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0318 22:03:37.740020 3359 log.go:172] (0xc0005f5ae0) (5) Data frame handling\nI0318 22:03:37.742294 3359 log.go:172] (0xc00058ea50) Data frame received for 1\nI0318 22:03:37.742328 3359 log.go:172] (0xc000544000) (1) Data frame handling\nI0318 22:03:37.742371 3359 log.go:172] (0xc000544000) (1) Data frame sent\nI0318 22:03:37.742435 3359 log.go:172] (0xc00058ea50) (0xc000544000) Stream removed, broadcasting: 1\nI0318 22:03:37.742486 3359 log.go:172] (0xc00058ea50) Go away received\nI0318 22:03:37.742933 3359 log.go:172] (0xc00058ea50) (0xc000544000) Stream removed, broadcasting: 1\nI0318 22:03:37.742971 3359 log.go:172] (0xc00058ea50) (0xc0009f2000) Stream removed, broadcasting: 3\nI0318 22:03:37.742988 3359 log.go:172] (0xc00058ea50) (0xc0005f5ae0) Stream removed, broadcasting: 5\n" Mar 18 22:03:37.749: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 18 22:03:37.749: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 18 22:03:47.768: INFO: Waiting for StatefulSet statefulset-2971/ss2 to complete update Mar 18 22:03:47.768: INFO: Waiting for Pod statefulset-2971/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 18 22:03:47.768: INFO: Waiting for Pod statefulset-2971/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 18 22:03:47.768: INFO: Waiting for Pod statefulset-2971/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 18 22:03:57.777: INFO: Waiting for StatefulSet statefulset-2971/ss2 to complete update Mar 18 22:03:57.777: INFO: Waiting for Pod statefulset-2971/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 18 22:03:57.777: INFO: Waiting for Pod statefulset-2971/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 18 22:04:07.776: INFO: Waiting for StatefulSet statefulset-2971/ss2 to complete update Mar 18 22:04:07.776: INFO: Waiting for Pod statefulset-2971/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 18 22:04:17.776: INFO: Deleting all 
statefulset in ns statefulset-2971 Mar 18 22:04:17.780: INFO: Scaling statefulset ss2 to 0 Mar 18 22:04:27.820: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 22:04:27.822: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:04:27.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2971" for this suite. • [SLOW TEST:143.979 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":227,"skipped":3508,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:04:27.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 18 22:04:27.937: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:04:35.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9505" for this suite. 
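RestartNever matters in this test because init containers still run to completion, in order, before the app container starts; a failed init container marks the whole pod Failed instead of being retried. A minimal sketch (names, image, and commands are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: init-demo              # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox             # illustrative image
    command: ["sh", "-c", "echo first"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo second"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo app running"]

The pod's Initialized condition flips to True only after init-1 and init-2 both exit 0, which is what "invoke init containers" asserts.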
• [SLOW TEST:7.528 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":228,"skipped":3510,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:04:35.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Mar 18 22:04:35.462: INFO: Waiting up to 5m0s for pod "var-expansion-267f1d24-6f9f-4f72-8ae5-e7145b3abe6e" in namespace "var-expansion-5165" to be "success or failure" Mar 18 22:04:35.578: INFO: Pod "var-expansion-267f1d24-6f9f-4f72-8ae5-e7145b3abe6e": Phase="Pending", Reason="", readiness=false. Elapsed: 115.558815ms Mar 18 22:04:37.582: INFO: Pod "var-expansion-267f1d24-6f9f-4f72-8ae5-e7145b3abe6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11958882s Mar 18 22:04:39.586: INFO: Pod "var-expansion-267f1d24-6f9f-4f72-8ae5-e7145b3abe6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123777665s STEP: Saw pod success Mar 18 22:04:39.586: INFO: Pod "var-expansion-267f1d24-6f9f-4f72-8ae5-e7145b3abe6e" satisfied condition "success or failure" Mar 18 22:04:39.589: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-267f1d24-6f9f-4f72-8ae5-e7145b3abe6e container dapi-container: STEP: delete the pod Mar 18 22:04:39.685: INFO: Waiting for pod var-expansion-267f1d24-6f9f-4f72-8ae5-e7145b3abe6e to disappear Mar 18 22:04:39.688: INFO: Pod var-expansion-267f1d24-6f9f-4f72-8ae5-e7145b3abe6e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:04:39.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5165" for this suite. 
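The substitution under test is the $(VAR) expansion the kubelet performs on command and args using the container's own environment, before any shell is involved. A minimal sketch (names, image, and message are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox             # illustrative image
    env:
    - name: MESSAGE
      value: "hello from the environment"
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]  # expanded by the kubelet before the shell runs

An unknown $(NAME) is left as-is rather than erroring, and $$(NAME) escapes to a literal $(NAME), which is why the syntax is safe to mix with shell code.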
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3528,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:04:39.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 22:04:39.737: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 18 22:04:39.759: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 18 22:04:44.769: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 18 22:04:44.769: INFO: Creating deployment "test-rolling-update-deployment" Mar 18 22:04:44.772: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 18 22:04:44.783: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 18 22:04:46.791: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 18 22:04:46.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165884, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165884, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165884, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720165884, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 22:04:48.799: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 18 22:04:48.810: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2664 /apis/apps/v1/namespaces/deployment-2664/deployments/test-rolling-update-deployment 28dde2c2-6d85-4d12-ae72-f24eccea6c81 864328 1 2020-03-18 22:04:44 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00377f0b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-18 22:04:44 +0000 UTC,LastTransitionTime:2020-03-18 22:04:44 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-18 22:04:47 +0000 UTC,LastTransitionTime:2020-03-18 22:04:44 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 18 22:04:48.814: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-2664 /apis/apps/v1/namespaces/deployment-2664/replicasets/test-rolling-update-deployment-67cf4f6444 635c7adc-f18a-4d0a-94cb-dcfbe0344162 864316 1 2020-03-18 22:04:44 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 28dde2c2-6d85-4d12-ae72-f24eccea6c81 0xc00377f557 0xc00377f558}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00377f5c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[]
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 18 22:04:48.814: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 18 22:04:48.814: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2664 /apis/apps/v1/namespaces/deployment-2664/replicasets/test-rolling-update-controller 4d6625a0-b785-4e3a-88ba-653bc0f59dbf 864327 2 2020-03-18 22:04:39 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 28dde2c2-6d85-4d12-ae72-f24eccea6c81 0xc00377f487 0xc00377f488}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00377f4e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 18 22:04:48.818: INFO: Pod "test-rolling-update-deployment-67cf4f6444-g4mgg" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-g4mgg test-rolling-update-deployment-67cf4f6444- deployment-2664 /api/v1/namespaces/deployment-2664/pods/test-rolling-update-deployment-67cf4f6444-g4mgg d2ca71dc-0c95-4fb1-9eda-73348fb7f5e8 864315 0 2020-03-18 22:04:44 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 635c7adc-f18a-4d0a-94cb-dcfbe0344162 0xc00377fa37 0xc00377fa38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2kf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2kf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2kf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 22:04:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 22:04:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 22:04:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 22:04:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.87,StartTime:2020-03-18 22:04:44 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-18 22:04:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://b63ee37cfbaf6ae8466fcd53be866201727bfa4f1d21c60886b4cc60450b16b5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.87,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:04:48.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2664" for this suite. • [SLOW TEST:9.133 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":230,"skipped":3534,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:04:48.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 18 22:04:48.897: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 18 22:05:00.168: INFO: >>> kubeConfig: /root/.kube/config Mar 18 22:05:02.066: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:05:12.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4150" for this suite. 
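
The Deployment dumped above ("test-rolling-update-deployment") carries the default rolling-update parameters: maxUnavailable and maxSurge of 25%. A minimal sketch of how such a strategy is declared with the k8s.io/api types (object and label names are illustrative):

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        maxUnavailable := intstr.FromString("25%")
        maxSurge := intstr.FromString("25%")
        labels := map[string]string{"name": "sample-pod"}
        dep := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: int32Ptr(1),
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Strategy: appsv1.DeploymentStrategy{
                    Type: appsv1.RollingUpdateDeploymentStrategyType,
                    RollingUpdate: &appsv1.RollingUpdateDeployment{
                        MaxUnavailable: &maxUnavailable,
                        MaxSurge:       &maxSurge,
                    },
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "agnhost",
                            Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                        }},
                    },
                },
            },
        }
        fmt.Printf("%+v\n", dep)
    }

With replicas:1, a maxSurge of 25% rounds up to one extra pod, so the rollout briefly runs two pods, which matches the Replicas:2, UpdatedReplicas:1 seen in the intermediate status above.
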
• [SLOW TEST:23.634 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":231,"skipped":3549,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:05:12.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 18 22:05:17.060: INFO: Successfully updated pod "pod-update-activedeadlineseconds-59dd749f-26e4-4501-8ebc-bb079bfc8adc" Mar 18 22:05:17.060: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-59dd749f-26e4-4501-8ebc-bb079bfc8adc" in namespace "pods-1875" to be "terminated due to deadline exceeded" Mar 18 22:05:17.064: INFO: Pod "pod-update-activedeadlineseconds-59dd749f-26e4-4501-8ebc-bb079bfc8adc": Phase="Running", Reason="", readiness=true. Elapsed: 3.452778ms Mar 18 22:05:19.068: INFO: Pod "pod-update-activedeadlineseconds-59dd749f-26e4-4501-8ebc-bb079bfc8adc": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.007248045s Mar 18 22:05:19.068: INFO: Pod "pod-update-activedeadlineseconds-59dd749f-26e4-4501-8ebc-bb079bfc8adc" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:05:19.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1875" for this suite. • [SLOW TEST:6.615 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3554,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:05:19.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:05:32.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3848" for this suite. • [SLOW TEST:13.149 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":233,"skipped":3571,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:05:32.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b2b336ed-f242-4a62-af27-97c7328fa7b1 STEP: Creating a pod to test consume secrets Mar 18 22:05:32.385: INFO: Waiting up to 5m0s for pod "pod-secrets-b789093b-7ce2-4c03-840e-92a060c74de5" in namespace "secrets-9989" to be "success or failure" Mar 18 22:05:32.410: INFO: Pod "pod-secrets-b789093b-7ce2-4c03-840e-92a060c74de5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.403549ms Mar 18 22:05:34.414: INFO: Pod "pod-secrets-b789093b-7ce2-4c03-840e-92a060c74de5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029142591s Mar 18 22:05:36.418: INFO: Pod "pod-secrets-b789093b-7ce2-4c03-840e-92a060c74de5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032847737s STEP: Saw pod success Mar 18 22:05:36.418: INFO: Pod "pod-secrets-b789093b-7ce2-4c03-840e-92a060c74de5" satisfied condition "success or failure" Mar 18 22:05:36.421: INFO: Trying to get logs from node jerma-worker pod pod-secrets-b789093b-7ce2-4c03-840e-92a060c74de5 container secret-volume-test: STEP: delete the pod Mar 18 22:05:36.448: INFO: Waiting for pod pod-secrets-b789093b-7ce2-4c03-840e-92a060c74de5 to disappear Mar 18 22:05:36.453: INFO: Pod pod-secrets-b789093b-7ce2-4c03-840e-92a060c74de5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:05:36.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9989" for this suite. STEP: Destroying namespace "secret-namespace-6495" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3613,"failed":0} SS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:05:36.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-c74eddc0-77df-4d16-8aeb-3286b693a2d8 STEP: Creating secret with name s-test-opt-upd-52c4bfe5-2a89-48f6-a914-0c1cfcbc2c8b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c74eddc0-77df-4d16-8aeb-3286b693a2d8 STEP: Updating secret s-test-opt-upd-52c4bfe5-2a89-48f6-a914-0c1cfcbc2c8b STEP: Creating secret with name s-test-opt-create-f14746ea-6445-4d29-9541-b1a723e2b5f4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:07:15.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9668" for this suite. 
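
The optional-secret behavior exercised above hinges on SecretVolumeSource.Optional: the pod starts (and keeps running) even when the referenced secret is missing or later deleted, and the projected content converges once the secret exists. A minimal sketch (secret, pod, and image names are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        optional := true
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "optional-secret-demo"},
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{
                            SecretName: "s-test-opt-del", // may be deleted while the pod runs
                            Optional:   &optional,        // pod stays up even if the secret is absent
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox:1.29",
                    Command: []string{"sh", "-c", "while true; do ls /etc/secret-volume; sleep 1; done"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "secret-volume",
                        MountPath: "/etc/secret-volume",
                    }},
                }},
            },
        }
        fmt.Printf("%+v\n", pod)
    }
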
• [SLOW TEST:98.608 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3615,"failed":0} S ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:07:15.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:08:15.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5016" for this suite. • [SLOW TEST:60.074 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3616,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:08:15.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:08:15.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6903" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":237,"skipped":3625,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:08:15.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 22:08:15.404: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ca734f6-0f48-4b76-a774-f68e67c94226" in namespace "downward-api-8460" to be "success or failure" Mar 18 22:08:15.409: INFO: Pod "downwardapi-volume-9ca734f6-0f48-4b76-a774-f68e67c94226": Phase="Pending", Reason="", readiness=false. Elapsed: 4.406859ms Mar 18 22:08:17.443: INFO: Pod "downwardapi-volume-9ca734f6-0f48-4b76-a774-f68e67c94226": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038556165s Mar 18 22:08:19.449: INFO: Pod "downwardapi-volume-9ca734f6-0f48-4b76-a774-f68e67c94226": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044880857s STEP: Saw pod success Mar 18 22:08:19.449: INFO: Pod "downwardapi-volume-9ca734f6-0f48-4b76-a774-f68e67c94226" satisfied condition "success or failure" Mar 18 22:08:19.452: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9ca734f6-0f48-4b76-a774-f68e67c94226 container client-container: STEP: delete the pod Mar 18 22:08:19.504: INFO: Waiting for pod downwardapi-volume-9ca734f6-0f48-4b76-a774-f68e67c94226 to disappear Mar 18 22:08:19.517: INFO: Pod downwardapi-volume-9ca734f6-0f48-4b76-a774-f68e67c94226 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:08:19.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8460" for this suite. 
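
The downward API volume used above projects pod metadata into files; the "podname only" case maps metadata.name to a single path. A minimal sketch (pod name, image, and paths are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                // The file /etc/podinfo/podname will contain the pod's name.
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox:1.29",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
            },
        }
        fmt.Printf("%+v\n", pod)
    }
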
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3635,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:08:19.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-28ebd9b1-55ee-4bad-aa87-d18a8c307357 STEP: Creating a pod to test consume configMaps Mar 18 22:08:19.596: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9d425b18-0c3c-4ceb-9da9-302a8321350d" in namespace "projected-1644" to be "success or failure" Mar 18 22:08:19.601: INFO: Pod "pod-projected-configmaps-9d425b18-0c3c-4ceb-9da9-302a8321350d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154416ms Mar 18 22:08:21.605: INFO: Pod "pod-projected-configmaps-9d425b18-0c3c-4ceb-9da9-302a8321350d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008150856s Mar 18 22:08:23.608: INFO: Pod "pod-projected-configmaps-9d425b18-0c3c-4ceb-9da9-302a8321350d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011840158s STEP: Saw pod success Mar 18 22:08:23.608: INFO: Pod "pod-projected-configmaps-9d425b18-0c3c-4ceb-9da9-302a8321350d" satisfied condition "success or failure" Mar 18 22:08:23.611: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-9d425b18-0c3c-4ceb-9da9-302a8321350d container projected-configmap-volume-test: STEP: delete the pod Mar 18 22:08:23.643: INFO: Waiting for pod pod-projected-configmaps-9d425b18-0c3c-4ceb-9da9-302a8321350d to disappear Mar 18 22:08:23.648: INFO: Pod pod-projected-configmaps-9d425b18-0c3c-4ceb-9da9-302a8321350d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:08:23.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1644" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:08:23.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0318 22:08:54.275861 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 18 22:08:54.275: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:08:54.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-765" for this suite. 
• [SLOW TEST:30.625 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":240,"skipped":3725,"failed":0} SSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:08:54.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 18 22:08:54.340: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Mar 18 22:08:54.823: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 18 22:08:57.320: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720166134, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720166134, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720166134, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720166134, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 22:09:00.002: INFO: Waited 638.498634ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:09:00.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4468" for this suite. 
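
Registering a sample API server with the aggregator, as above, amounts to creating an APIService object that maps a group/version onto a Service fronting the backend. A minimal sketch using the kube-aggregator types (group, version, namespace, and service names are illustrative; the CABundle field, which carries the PEM bundle used to verify the backend, is elided):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
    )

    func main() {
        port := int32(443)
        apiService := &apiregistrationv1.APIService{
            // The object name is conventionally "<version>.<group>".
            ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
            Spec: apiregistrationv1.APIServiceSpec{
                Group:                "wardle.example.com",
                Version:              "v1alpha1",
                GroupPriorityMinimum: 2000,
                VersionPriority:      200,
                Service: &apiregistrationv1.ServiceReference{
                    Namespace: "aggregator-demo",
                    Name:      "sample-api",
                    Port:      &port,
                },
            },
        }
        fmt.Printf("%+v\n", apiService)
    }
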
• [SLOW TEST:6.385 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":241,"skipped":3728,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:09:00.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 18 22:09:00.837: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:00.849: INFO: Number of nodes with available pods: 0 Mar 18 22:09:00.849: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:09:01.855: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:01.858: INFO: Number of nodes with available pods: 0 Mar 18 22:09:01.858: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:09:02.924: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:02.927: INFO: Number of nodes with available pods: 0 Mar 18 22:09:02.927: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:09:03.853: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:03.856: INFO: Number of nodes with available pods: 1 Mar 18 22:09:03.856: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:09:04.853: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:04.856: INFO: Number of nodes with available pods: 2 Mar 18 22:09:04.856: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 18 22:09:04.891: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:04.917: INFO: Number of nodes with available pods: 1 Mar 18 22:09:04.917: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:09:05.921: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:05.924: INFO: Number of nodes with available pods: 1 Mar 18 22:09:05.924: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:09:06.922: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:06.925: INFO: Number of nodes with available pods: 1 Mar 18 22:09:06.925: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:09:07.922: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:07.926: INFO: Number of nodes with available pods: 1 Mar 18 22:09:07.926: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:09:08.922: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:08.927: INFO: Number of nodes with available pods: 1 Mar 18 22:09:08.927: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:09:09.923: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:09.927: INFO: Number of nodes with available pods: 1 Mar 18 22:09:09.927: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:09:10.923: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:10.927: INFO: Number of nodes with available pods: 1 Mar 18 22:09:10.927: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:09:11.922: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:11.926: INFO: Number of nodes with available pods: 1 Mar 18 22:09:11.926: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:09:12.923: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:12.927: INFO: Number of nodes with available pods: 2 Mar 18 22:09:12.927: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6330, will wait for the garbage collector to delete the pods Mar 18 22:09:12.987: INFO: Deleting DaemonSet.extensions daemon-set took: 5.773124ms Mar 18 22:09:13.088: INFO: Terminating DaemonSet.extensions daemon-set pods took: 
100.201981ms Mar 18 22:09:19.591: INFO: Number of nodes with available pods: 0 Mar 18 22:09:19.591: INFO: Number of running nodes: 0, number of available pods: 0 Mar 18 22:09:19.595: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6330/daemonsets","resourceVersion":"865595"},"items":null} Mar 18 22:09:19.598: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6330/pods","resourceVersion":"865595"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:09:19.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6330" for this suite. • [SLOW TEST:18.989 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":242,"skipped":3729,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:09:19.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6105.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6105.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6105.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6105.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6105.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6105.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 18 22:09:25.774: INFO: DNS probes using dns-6105/dns-test-835e67da-6b72-434b-8bcb-5d1ce0a9b77a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:09:25.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6105" for this suite. • [SLOW TEST:6.193 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":243,"skipped":3748,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:09:25.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 22:09:26.804: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 22:09:28.812: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720166166, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720166166, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720166166, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720166166, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has 
paired with the endpoint Mar 18 22:09:31.835: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:09:31.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-475" for this suite. STEP: Destroying namespace "webhook-475-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.124 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":244,"skipped":3759,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:09:31.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 22:09:32.064: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Mar 18 22:09:32.072: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:32.077: INFO: Number of nodes with available pods: 0 Mar 18 22:09:32.077: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:09:33.082: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:33.087: INFO: Number of nodes with available pods: 0 Mar 18 22:09:33.087: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:09:34.082: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:34.086: INFO: Number of nodes with available pods: 0 Mar 18 22:09:34.086: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:09:35.082: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:35.085: INFO: Number of nodes with available pods: 1 Mar 18 22:09:35.085: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:09:36.085: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:36.088: INFO: Number of nodes with available pods: 2 Mar 18 22:09:36.088: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 18 22:09:36.114: INFO: Wrong image for pod: daemon-set-tmtd7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 18 22:09:36.114: INFO: Wrong image for pod: daemon-set-xfdv6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 18 22:09:36.133: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:37.144: INFO: Wrong image for pod: daemon-set-tmtd7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 18 22:09:37.144: INFO: Wrong image for pod: daemon-set-xfdv6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 18 22:09:37.175: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:38.229: INFO: Wrong image for pod: daemon-set-tmtd7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 18 22:09:38.229: INFO: Wrong image for pod: daemon-set-xfdv6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 18 22:09:38.233: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:39.138: INFO: Wrong image for pod: daemon-set-tmtd7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 18 22:09:39.138: INFO: Pod daemon-set-tmtd7 is not available Mar 18 22:09:39.138: INFO: Wrong image for pod: daemon-set-xfdv6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 18 22:09:39.143: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:40.137: INFO: Wrong image for pod: daemon-set-xfdv6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 18 22:09:40.137: INFO: Pod daemon-set-zjlcw is not available Mar 18 22:09:40.141: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:41.137: INFO: Wrong image for pod: daemon-set-xfdv6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 18 22:09:41.137: INFO: Pod daemon-set-zjlcw is not available Mar 18 22:09:41.141: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:42.137: INFO: Wrong image for pod: daemon-set-xfdv6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 18 22:09:42.137: INFO: Pod daemon-set-zjlcw is not available Mar 18 22:09:42.140: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:43.137: INFO: Wrong image for pod: daemon-set-xfdv6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 18 22:09:43.140: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:44.457: INFO: Wrong image for pod: daemon-set-xfdv6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 18 22:09:44.457: INFO: Pod daemon-set-xfdv6 is not available Mar 18 22:09:44.461: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:45.137: INFO: Wrong image for pod: daemon-set-xfdv6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 18 22:09:45.137: INFO: Pod daemon-set-xfdv6 is not available Mar 18 22:09:45.141: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:46.137: INFO: Wrong image for pod: daemon-set-xfdv6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 18 22:09:46.137: INFO: Pod daemon-set-xfdv6 is not available Mar 18 22:09:46.141: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:47.137: INFO: Wrong image for pod: daemon-set-xfdv6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 18 22:09:47.137: INFO: Pod daemon-set-xfdv6 is not available Mar 18 22:09:47.141: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:48.138: INFO: Wrong image for pod: daemon-set-xfdv6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 18 22:09:48.138: INFO: Pod daemon-set-xfdv6 is not available Mar 18 22:09:48.141: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:49.137: INFO: Wrong image for pod: daemon-set-xfdv6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 18 22:09:49.137: INFO: Pod daemon-set-xfdv6 is not available Mar 18 22:09:49.141: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:50.137: INFO: Pod daemon-set-2qkbf is not available Mar 18 22:09:50.142: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Mar 18 22:09:50.146: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:50.150: INFO: Number of nodes with available pods: 1 Mar 18 22:09:50.150: INFO: Node jerma-worker2 is running more than one daemon pod Mar 18 22:09:51.229: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:51.233: INFO: Number of nodes with available pods: 1 Mar 18 22:09:51.233: INFO: Node jerma-worker2 is running more than one daemon pod Mar 18 22:09:52.154: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:52.158: INFO: Number of nodes with available pods: 1 Mar 18 22:09:52.158: INFO: Node jerma-worker2 is running more than one daemon pod Mar 18 22:09:53.153: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:53.167: INFO: Number of nodes with available pods: 1 Mar 18 22:09:53.167: INFO: Node jerma-worker2 is running more than one daemon pod Mar 18 22:09:54.155: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:09:54.158: INFO: Number of nodes with available pods: 2 Mar 18 22:09:54.159: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9883, will wait for the garbage collector to delete the pods Mar 18 22:09:54.230: INFO: Deleting DaemonSet.extensions daemon-set took: 5.379009ms Mar 
18 22:09:54.330: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.236097ms Mar 18 22:09:59.734: INFO: Number of nodes with available pods: 0 Mar 18 22:09:59.735: INFO: Number of running nodes: 0, number of available pods: 0 Mar 18 22:09:59.737: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9883/daemonsets","resourceVersion":"865925"},"items":null} Mar 18 22:09:59.739: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9883/pods","resourceVersion":"865925"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:09:59.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9883" for this suite. • [SLOW TEST:27.778 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":245,"skipped":3765,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:09:59.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 18 22:09:59.826: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 18 22:10:00.314: INFO: Waiting for terminating namespaces to be deleted... 
Mar 18 22:10:00.316: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 18 22:10:00.330: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 18 22:10:00.330: INFO: Container kindnet-cni ready: true, restart count 0 Mar 18 22:10:00.331: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 18 22:10:00.331: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 22:10:00.331: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 18 22:10:00.344: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 18 22:10:00.344: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 22:10:00.344: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 18 22:10:00.344: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Mar 18 22:10:00.879: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Mar 18 22:10:00.879: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Mar 18 22:10:00.879: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Mar 18 22:10:00.879: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Mar 18 22:10:00.879: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Mar 18 22:10:00.886: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
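As a reference for what this step amounts to: once the filler pods hold 11130m per node, any additional CPU request is unschedulable, which produces the FailedScheduling event recorded just below. A minimal client-go sketch, assuming client-go v0.18+ (older releases omit the context argument); the pause image and namespace mirror the log, and the 500m request is an arbitrary nonzero value:

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // With the filler pods holding the nodes' remaining allocatable CPU,
    // any further request cannot fit; the scheduler leaves this pod
    // Pending and emits a FailedScheduling "Insufficient cpu" event.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.1",
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceCPU: resource.MustParse("500m"),
                    },
                },
            }},
        },
    }
    created, err := cs.CoreV1().Pods("sched-pred-878").Create(context.TODO(), pod, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println(created.Name, "phase:", created.Status.Phase) // stays Pending while CPU is exhausted
}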
STEP: Considering event: Type = [Normal], Name = [filler-pod-2f9b6d6a-a258-4663-9f99-bdb293e5c671.15fd857987c32c9f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-878/filler-pod-2f9b6d6a-a258-4663-9f99-bdb293e5c671 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-2f9b6d6a-a258-4663-9f99-bdb293e5c671.15fd857a097ceebc], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-2f9b6d6a-a258-4663-9f99-bdb293e5c671.15fd857a2db5f54c], Reason = [Created], Message = [Created container filler-pod-2f9b6d6a-a258-4663-9f99-bdb293e5c671] STEP: Considering event: Type = [Normal], Name = [filler-pod-2f9b6d6a-a258-4663-9f99-bdb293e5c671.15fd857a3c71fa4a], Reason = [Started], Message = [Started container filler-pod-2f9b6d6a-a258-4663-9f99-bdb293e5c671] STEP: Considering event: Type = [Normal], Name = [filler-pod-73801bad-d4fc-43e5-b6d6-34442c2aad88.15fd85798632d5d0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-878/filler-pod-73801bad-d4fc-43e5-b6d6-34442c2aad88 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-73801bad-d4fc-43e5-b6d6-34442c2aad88.15fd8579cd9e9d11], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-73801bad-d4fc-43e5-b6d6-34442c2aad88.15fd857a00494123], Reason = [Created], Message = [Created container filler-pod-73801bad-d4fc-43e5-b6d6-34442c2aad88] STEP: Considering event: Type = [Normal], Name = [filler-pod-73801bad-d4fc-43e5-b6d6-34442c2aad88.15fd857a15ec4ed6], Reason = [Started], Message = [Started container filler-pod-73801bad-d4fc-43e5-b6d6-34442c2aad88] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fd857a78a0f502], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:10:06.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-878" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:6.308 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":246,"skipped":3802,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:10:06.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 22:10:06.110: INFO: Waiting up to 5m0s for pod "busybox-user-65534-f0f33040-4505-4753-9bf8-cbdf77fd4e5c" in namespace "security-context-test-9072" to be "success or failure" Mar 18 22:10:06.125: INFO: Pod "busybox-user-65534-f0f33040-4505-4753-9bf8-cbdf77fd4e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.251112ms Mar 18 22:10:08.130: INFO: Pod "busybox-user-65534-f0f33040-4505-4753-9bf8-cbdf77fd4e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020118129s Mar 18 22:10:10.135: INFO: Pod "busybox-user-65534-f0f33040-4505-4753-9bf8-cbdf77fd4e5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024659679s Mar 18 22:10:10.135: INFO: Pod "busybox-user-65534-f0f33040-4505-4753-9bf8-cbdf77fd4e5c" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:10:10.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9072" for this suite. 
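The pod under test reduces to one field: a container-level securityContext pinning runAsUser to 65534, the conventional "nobody" UID. A minimal client-go sketch (v0.18+ API; busybox stands in for the test image, names are illustrative):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    uid := int64(65534) // the "nobody" user
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:            "main",
                Image:           "busybox",
                Command:         []string{"sh", "-c", "id -u"}, // should print 65534
                SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
    fmt.Println("pod created; its logs should show 65534")
}

The suite then polls the pod for the Succeeded phase, exactly the "success or failure" wait visible in the log above.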
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":3830,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:10:10.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-348 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 18 22:10:10.233: INFO: Found 0 stateful pods, waiting for 3 Mar 18 22:10:20.238: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 18 22:10:20.238: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 18 22:10:20.238: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 18 22:10:30.238: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 18 22:10:30.238: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 18 22:10:30.238: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 18 22:10:30.266: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 18 22:10:40.339: INFO: Updating stateful set ss2 Mar 18 22:10:40.366: INFO: Waiting for Pod statefulset-348/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 18 22:10:50.966: INFO: Found 2 stateful pods, waiting for 3 Mar 18 22:11:00.971: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 18 22:11:00.971: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 18 22:11:00.971: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 18 22:11:01.012: INFO: Updating stateful set ss2 Mar 18 22:11:01.032: INFO: Waiting for Pod statefulset-348/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 18 22:11:11.057: INFO: Updating stateful set ss2 Mar 18 22:11:11.074: INFO: Waiting for StatefulSet statefulset-348/ss2 to 
complete update Mar 18 22:11:11.074: INFO: Waiting for Pod statefulset-348/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 18 22:11:22.359: INFO: Waiting for StatefulSet statefulset-348/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 18 22:11:31.082: INFO: Deleting all statefulset in ns statefulset-348 Mar 18 22:11:31.085: INFO: Scaling statefulset ss2 to 0 Mar 18 22:11:51.101: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 22:11:51.105: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:11:51.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-348" for this suite. • [SLOW TEST:100.995 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":248,"skipped":3833,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:11:51.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 18 22:11:59.265: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 22:11:59.297: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 22:12:01.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 22:12:01.301: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 22:12:03.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 22:12:03.301: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 22:12:05.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 22:12:05.302: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 22:12:07.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 22:12:07.300: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 22:12:09.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 22:12:09.301: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:12:09.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-252" for this suite. • [SLOW TEST:18.191 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":3853,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:12:09.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:12:16.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7359" for this suite. 
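"Promptly calculated" means the quota controller reconciles spec.hard into status.hard and status.used shortly after creation. A hedged client-go sketch (v0.18+; the quota name and limits are illustrative) that creates a quota and polls for exactly that:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    ctx := context.TODO()

    rq := &corev1.ResourceQuota{
        ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
        Spec: corev1.ResourceQuotaSpec{Hard: corev1.ResourceList{
            corev1.ResourcePods: resource.MustParse("5"),
            corev1.ResourceCPU:  resource.MustParse("1"),
        }},
    }
    if _, err := cs.CoreV1().ResourceQuotas("default").Create(ctx, rq, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
    // Poll until the quota controller has filled in the status.
    for {
        got, err := cs.CoreV1().ResourceQuotas("default").Get(ctx, "test-quota", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if len(got.Status.Hard) > 0 && got.Status.Used != nil {
            fmt.Printf("hard=%v used=%v\n", got.Status.Hard, got.Status.Used)
            return
        }
        time.Sleep(time.Second)
    }
}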
• [SLOW TEST:7.117 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":250,"skipped":3853,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:12:16.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 22:12:16.975: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 22:12:18.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720166336, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720166336, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720166337, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720166336, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 22:12:22.022: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery 
document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:12:22.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3908" for this suite. STEP: Destroying namespace "webhook-3908-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.676 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":251,"skipped":3870,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:12:22.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 18 22:12:22.191: INFO: Waiting up to 5m0s for pod "pod-b78101b1-883f-4e9c-916c-1e27ba6d9395" in namespace "emptydir-906" to be "success or failure" Mar 18 22:12:22.195: INFO: Pod "pod-b78101b1-883f-4e9c-916c-1e27ba6d9395": Phase="Pending", Reason="", readiness=false. Elapsed: 3.745509ms Mar 18 22:12:24.199: INFO: Pod "pod-b78101b1-883f-4e9c-916c-1e27ba6d9395": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008062731s Mar 18 22:12:26.203: INFO: Pod "pod-b78101b1-883f-4e9c-916c-1e27ba6d9395": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012293204s STEP: Saw pod success Mar 18 22:12:26.203: INFO: Pod "pod-b78101b1-883f-4e9c-916c-1e27ba6d9395" satisfied condition "success or failure" Mar 18 22:12:26.206: INFO: Trying to get logs from node jerma-worker pod pod-b78101b1-883f-4e9c-916c-1e27ba6d9395 container test-container: STEP: delete the pod Mar 18 22:12:26.227: INFO: Waiting for pod pod-b78101b1-883f-4e9c-916c-1e27ba6d9395 to disappear Mar 18 22:12:26.231: INFO: Pod pod-b78101b1-883f-4e9c-916c-1e27ba6d9395 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:12:26.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-906" for this suite. 
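The "(root,0777,tmpfs)" triple decodes as: run as root, expect 0777 permissions on the volume contents, and back the emptyDir with memory, i.e. tmpfs via medium Memory. A sketch under those assumptions (client-go v0.18+; busybox replaces the conformance mounttest image, names are illustrative):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-0777"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium Memory makes the kubelet mount a tmpfs.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                Command: []string{"sh", "-c",
                    "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c %a /test-volume/f && mount | grep test-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    p, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("created", p.Name, "- its logs should show 777 and a tmpfs mount")
}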
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":3891,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:12:26.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:12:42.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5672" for this suite. • [SLOW TEST:16.201 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":253,"skipped":3910,"failed":0} SSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:12:42.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Mar 18 22:12:42.515: INFO: Created pod &Pod{ObjectMeta:{dns-9177 dns-9177 /api/v1/namespaces/dns-9177/pods/dns-9177 db41fc14-45a5-4c5a-a2ec-61333b34a78d 866938 0 2020-03-18 22:12:42 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kxfld,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kxfld,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kxfld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Mar 18 22:12:46.539: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9177 PodName:dns-9177 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 22:12:46.539: INFO: >>> kubeConfig: /root/.kube/config I0318 22:12:46.584013 6 log.go:172] (0xc0007c6630) (0xc0017d6640) Create stream I0318 22:12:46.584045 6 log.go:172] (0xc0007c6630) (0xc0017d6640) Stream added, broadcasting: 1 I0318 22:12:46.586675 6 log.go:172] (0xc0007c6630) Reply frame received for 1 I0318 22:12:46.586720 6 log.go:172] (0xc0007c6630) (0xc0017d66e0) Create stream I0318 22:12:46.586821 6 log.go:172] (0xc0007c6630) (0xc0017d66e0) Stream added, broadcasting: 3 I0318 22:12:46.587843 6 log.go:172] (0xc0007c6630) Reply frame received for 3 I0318 22:12:46.587881 6 log.go:172] (0xc0007c6630) (0xc0017d6820) Create stream I0318 22:12:46.587894 6 log.go:172] (0xc0007c6630) (0xc0017d6820) Stream added, broadcasting: 5 I0318 22:12:46.588929 6 log.go:172] (0xc0007c6630) Reply frame received for 5 I0318 22:12:46.682993 6 log.go:172] (0xc0007c6630) Data frame received for 3 I0318 22:12:46.683023 6 log.go:172] (0xc0017d66e0) (3) Data frame handling I0318 22:12:46.683045 6 log.go:172] (0xc0017d66e0) (3) Data frame sent I0318 22:12:46.683630 6 log.go:172] (0xc0007c6630) Data frame received for 3 I0318 22:12:46.683660 6 log.go:172] (0xc0017d66e0) (3) Data frame handling I0318 22:12:46.683795 6 log.go:172] (0xc0007c6630) Data frame received for 5 I0318 22:12:46.683844 6 log.go:172] (0xc0017d6820) (5) Data frame handling I0318 22:12:46.685659 6 log.go:172] (0xc0007c6630) Data frame received for 1 I0318 22:12:46.685696 6 log.go:172] (0xc0017d6640) (1) Data frame handling I0318 22:12:46.685827 6 log.go:172] (0xc0017d6640) (1) Data frame sent I0318 22:12:46.685843 6 log.go:172] (0xc0007c6630) (0xc0017d6640) Stream removed, broadcasting: 1 I0318 22:12:46.685907 6 log.go:172] (0xc0007c6630) (0xc0017d6640) Stream removed, broadcasting: 1 I0318 22:12:46.685927 6 log.go:172] (0xc0007c6630) (0xc0017d66e0) Stream removed, broadcasting: 3 I0318 22:12:46.685986 6 log.go:172] (0xc0007c6630) Go away received I0318 22:12:46.686086 6 log.go:172] (0xc0007c6630) (0xc0017d6820) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 18 22:12:46.686: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9177 PodName:dns-9177 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 22:12:46.686: INFO: >>> kubeConfig: /root/.kube/config I0318 22:12:46.717256 6 log.go:172] (0xc000bbc4d0) (0xc0028b2820) Create stream I0318 22:12:46.717277 6 log.go:172] (0xc000bbc4d0) (0xc0028b2820) Stream added, broadcasting: 1 I0318 22:12:46.721863 6 log.go:172] (0xc000bbc4d0) Reply frame received for 1 I0318 22:12:46.721918 6 log.go:172] (0xc000bbc4d0) (0xc0028b28c0) Create stream I0318 22:12:46.722036 6 log.go:172] (0xc000bbc4d0) (0xc0028b28c0) Stream added, broadcasting: 3 I0318 22:12:46.725875 6 log.go:172] (0xc000bbc4d0) Reply frame received for 3 I0318 22:12:46.725909 6 log.go:172] (0xc000bbc4d0) (0xc0012b9ea0) Create stream I0318 22:12:46.725925 6 log.go:172] (0xc000bbc4d0) (0xc0012b9ea0) Stream added, broadcasting: 5 I0318 22:12:46.726827 6 log.go:172] (0xc000bbc4d0) Reply frame received for 5 I0318 22:12:46.796528 6 log.go:172] (0xc000bbc4d0) Data frame received for 3 I0318 22:12:46.796570 6 log.go:172] (0xc0028b28c0) (3) Data frame handling I0318 22:12:46.796592 6 log.go:172] (0xc0028b28c0) (3) Data frame sent I0318 22:12:46.797405 6 log.go:172] (0xc000bbc4d0) Data frame received for 3 I0318 22:12:46.797438 6 log.go:172] (0xc0028b28c0) (3) Data frame handling I0318 22:12:46.797468 6 log.go:172] (0xc000bbc4d0) Data frame received for 5 I0318 22:12:46.797489 6 log.go:172] (0xc0012b9ea0) (5) Data frame handling I0318 22:12:46.799216 6 log.go:172] (0xc000bbc4d0) Data frame received for 1 I0318 22:12:46.799232 6 log.go:172] (0xc0028b2820) (1) Data frame handling I0318 22:12:46.799244 6 log.go:172] (0xc0028b2820) (1) Data frame sent I0318 22:12:46.799254 6 log.go:172] (0xc000bbc4d0) (0xc0028b2820) Stream removed, broadcasting: 1 I0318 22:12:46.799388 6 log.go:172] (0xc000bbc4d0) (0xc0028b2820) Stream removed, broadcasting: 1 I0318 22:12:46.799420 6 log.go:172] (0xc000bbc4d0) (0xc0028b28c0) Stream removed, broadcasting: 3 I0318 22:12:46.799468 6 log.go:172] (0xc000bbc4d0) Go away received I0318 22:12:46.799534 6 log.go:172] (0xc000bbc4d0) (0xc0012b9ea0) Stream removed, broadcasting: 5 Mar 18 22:12:46.799: INFO: Deleting pod dns-9177... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:12:46.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9177" for this suite. 
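Condensed from the pod dump above, the two fields this test exercises are dnsPolicy None and a dnsConfig listing one nameserver and one search domain. A minimal sketch (client-go v0.18+; busybox replaces agnhost, the pod name is illustrative):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "dns-custom"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            DNSPolicy:     corev1.DNSNone, // ignore the cluster resolv.conf entirely
            DNSConfig: &corev1.PodDNSConfig{
                Nameservers: []string{"1.1.1.1"},
                Searches:    []string{"resolv.conf.local"},
            },
            Containers: []corev1.Container{{
                Name:    "main",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/resolv.conf"},
            }},
        },
    }
    p, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("created", p.Name, "- logs should show nameserver 1.1.1.1 and search resolv.conf.local")
}

With dnsPolicy None the kubelet writes exactly these values into the container's /etc/resolv.conf, which is what the two agnhost exec probes (dns-suffix and dns-server-list) verify in the log above.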
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":254,"skipped":3915,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:12:46.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 18 22:12:50.947: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6748 PodName:pod-sharedvolume-44afbfae-7c51-4aaf-8ca2-4848da6626f7 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 22:12:50.947: INFO: >>> kubeConfig: /root/.kube/config I0318 22:12:50.982603 6 log.go:172] (0xc000bbcc60) (0xc0028b37c0) Create stream I0318 22:12:50.982633 6 log.go:172] (0xc000bbcc60) (0xc0028b37c0) Stream added, broadcasting: 1 I0318 22:12:50.984371 6 log.go:172] (0xc000bbcc60) Reply frame received for 1 I0318 22:12:50.984412 6 log.go:172] (0xc000bbcc60) (0xc0023cb180) Create stream I0318 22:12:50.984427 6 log.go:172] (0xc000bbcc60) (0xc0023cb180) Stream added, broadcasting: 3 I0318 22:12:50.985483 6 log.go:172] (0xc000bbcc60) Reply frame received for 3 I0318 22:12:50.985519 6 log.go:172] (0xc000bbcc60) (0xc0019ef360) Create stream I0318 22:12:50.985532 6 log.go:172] (0xc000bbcc60) (0xc0019ef360) Stream added, broadcasting: 5 I0318 22:12:50.986424 6 log.go:172] (0xc000bbcc60) Reply frame received for 5 I0318 22:12:51.061436 6 log.go:172] (0xc000bbcc60) Data frame received for 5 I0318 22:12:51.061510 6 log.go:172] (0xc0019ef360) (5) Data frame handling I0318 22:12:51.061549 6 log.go:172] (0xc000bbcc60) Data frame received for 3 I0318 22:12:51.061572 6 log.go:172] (0xc0023cb180) (3) Data frame handling I0318 22:12:51.061604 6 log.go:172] (0xc0023cb180) (3) Data frame sent I0318 22:12:51.061627 6 log.go:172] (0xc000bbcc60) Data frame received for 3 I0318 22:12:51.061648 6 log.go:172] (0xc0023cb180) (3) Data frame handling I0318 22:12:51.063340 6 log.go:172] (0xc000bbcc60) Data frame received for 1 I0318 22:12:51.063382 6 log.go:172] (0xc0028b37c0) (1) Data frame handling I0318 22:12:51.063405 6 log.go:172] (0xc0028b37c0) (1) Data frame sent I0318 22:12:51.063429 6 log.go:172] (0xc000bbcc60) (0xc0028b37c0) Stream removed, broadcasting: 1 I0318 22:12:51.063538 6 log.go:172] (0xc000bbcc60) (0xc0028b37c0) Stream removed, broadcasting: 1 I0318 22:12:51.063555 6 log.go:172] (0xc000bbcc60) (0xc0023cb180) Stream removed, broadcasting: 3 I0318 22:12:51.063567 6 log.go:172] (0xc000bbcc60) (0xc0019ef360) Stream removed, broadcasting: 5 Mar 18 22:12:51.063: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:12:51.063: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready I0318 22:12:51.063944 6 log.go:172] (0xc000bbcc60) Go away received STEP: Destroying namespace "emptydir-6748" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":255,"skipped":3917,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:12:51.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-6jk2 STEP: Creating a pod to test atomic-volume-subpath Mar 18 22:12:51.131: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-6jk2" in namespace "subpath-8028" to be "success or failure" Mar 18 22:12:51.152: INFO: Pod "pod-subpath-test-downwardapi-6jk2": Phase="Pending", Reason="", readiness=false. Elapsed: 20.174117ms Mar 18 22:12:53.155: INFO: Pod "pod-subpath-test-downwardapi-6jk2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023481921s Mar 18 22:12:55.160: INFO: Pod "pod-subpath-test-downwardapi-6jk2": Phase="Running", Reason="", readiness=true. Elapsed: 4.028083132s Mar 18 22:12:57.164: INFO: Pod "pod-subpath-test-downwardapi-6jk2": Phase="Running", Reason="", readiness=true. Elapsed: 6.032244261s Mar 18 22:12:59.168: INFO: Pod "pod-subpath-test-downwardapi-6jk2": Phase="Running", Reason="", readiness=true. Elapsed: 8.036180467s Mar 18 22:13:01.172: INFO: Pod "pod-subpath-test-downwardapi-6jk2": Phase="Running", Reason="", readiness=true. Elapsed: 10.040614679s Mar 18 22:13:03.176: INFO: Pod "pod-subpath-test-downwardapi-6jk2": Phase="Running", Reason="", readiness=true. Elapsed: 12.044687579s Mar 18 22:13:05.181: INFO: Pod "pod-subpath-test-downwardapi-6jk2": Phase="Running", Reason="", readiness=true. Elapsed: 14.049019508s Mar 18 22:13:07.184: INFO: Pod "pod-subpath-test-downwardapi-6jk2": Phase="Running", Reason="", readiness=true. Elapsed: 16.052777132s Mar 18 22:13:09.189: INFO: Pod "pod-subpath-test-downwardapi-6jk2": Phase="Running", Reason="", readiness=true. Elapsed: 18.057011705s Mar 18 22:13:11.192: INFO: Pod "pod-subpath-test-downwardapi-6jk2": Phase="Running", Reason="", readiness=true. Elapsed: 20.060699404s Mar 18 22:13:13.196: INFO: Pod "pod-subpath-test-downwardapi-6jk2": Phase="Running", Reason="", readiness=true. Elapsed: 22.064890542s Mar 18 22:13:15.201: INFO: Pod "pod-subpath-test-downwardapi-6jk2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.069005516s STEP: Saw pod success Mar 18 22:13:15.201: INFO: Pod "pod-subpath-test-downwardapi-6jk2" satisfied condition "success or failure" Mar 18 22:13:15.204: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-6jk2 container test-container-subpath-downwardapi-6jk2: STEP: delete the pod Mar 18 22:13:15.263: INFO: Waiting for pod pod-subpath-test-downwardapi-6jk2 to disappear Mar 18 22:13:15.268: INFO: Pod pod-subpath-test-downwardapi-6jk2 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-6jk2 Mar 18 22:13:15.268: INFO: Deleting pod "pod-subpath-test-downwardapi-6jk2" in namespace "subpath-8028" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:13:15.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8028" for this suite. • [SLOW TEST:24.203 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":256,"skipped":3999,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:13:15.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 22:13:15.355: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53d8f75c-2990-4fc0-8be0-50145e6a1c13" in namespace "downward-api-2937" to be "success or failure" Mar 18 22:13:15.358: INFO: Pod "downwardapi-volume-53d8f75c-2990-4fc0-8be0-50145e6a1c13": Phase="Pending", Reason="", readiness=false. Elapsed: 3.635248ms Mar 18 22:13:17.387: INFO: Pod "downwardapi-volume-53d8f75c-2990-4fc0-8be0-50145e6a1c13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032272567s Mar 18 22:13:19.392: INFO: Pod "downwardapi-volume-53d8f75c-2990-4fc0-8be0-50145e6a1c13": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03689322s STEP: Saw pod success Mar 18 22:13:19.392: INFO: Pod "downwardapi-volume-53d8f75c-2990-4fc0-8be0-50145e6a1c13" satisfied condition "success or failure" Mar 18 22:13:19.395: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-53d8f75c-2990-4fc0-8be0-50145e6a1c13 container client-container: STEP: delete the pod Mar 18 22:13:19.416: INFO: Waiting for pod downwardapi-volume-53d8f75c-2990-4fc0-8be0-50145e6a1c13 to disappear Mar 18 22:13:19.419: INFO: Pod downwardapi-volume-53d8f75c-2990-4fc0-8be0-50145e6a1c13 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:13:19.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2937" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4016,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:13:19.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-39f7eeb8-4ae3-4c9e-a3db-f5136f8d1393 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:13:19.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6643" for this suite. 
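No pod is involved in this failure; it is API-side validation, since ConfigMap keys must be non-empty (and match [-._a-zA-Z0-9]+), so the create call is rejected before anything is stored. A sketch of the rejected call (client-go v0.18+, names illustrative):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
        Data:       map[string]string{"": "value-1"}, // empty key: rejected by validation
    }
    _, err = cs.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
    fmt.Println("expected validation error:", err)
}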
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":258,"skipped":4056,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:13:19.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-a71b02c4-ae07-41db-b915-e56935052ff1 STEP: Creating a pod to test consume configMaps Mar 18 22:13:19.587: INFO: Waiting up to 5m0s for pod "pod-configmaps-5d41fc6d-2168-40f2-8830-67faa9812d63" in namespace "configmap-1872" to be "success or failure" Mar 18 22:13:19.604: INFO: Pod "pod-configmaps-5d41fc6d-2168-40f2-8830-67faa9812d63": Phase="Pending", Reason="", readiness=false. Elapsed: 16.778336ms Mar 18 22:13:21.608: INFO: Pod "pod-configmaps-5d41fc6d-2168-40f2-8830-67faa9812d63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020777706s Mar 18 22:13:23.612: INFO: Pod "pod-configmaps-5d41fc6d-2168-40f2-8830-67faa9812d63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024670015s STEP: Saw pod success Mar 18 22:13:23.612: INFO: Pod "pod-configmaps-5d41fc6d-2168-40f2-8830-67faa9812d63" satisfied condition "success or failure" Mar 18 22:13:23.615: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-5d41fc6d-2168-40f2-8830-67faa9812d63 container configmap-volume-test: STEP: delete the pod Mar 18 22:13:23.635: INFO: Waiting for pod pod-configmaps-5d41fc6d-2168-40f2-8830-67faa9812d63 to disappear Mar 18 22:13:23.639: INFO: Pod pod-configmaps-5d41fc6d-2168-40f2-8830-67faa9812d63 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:13:23.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1872" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4095,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:13:23.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8063 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-8063 I0318 22:13:23.837812 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8063, replica count: 2 I0318 22:13:26.888292 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 22:13:29.888504 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 18 22:13:29.888: INFO: Creating new exec pod Mar 18 22:13:34.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8063 execpodxskrz -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 18 22:13:37.628: INFO: stderr: "I0318 22:13:37.536059 3384 log.go:172] (0xc00022b290) (0xc0006b3ea0) Create stream\nI0318 22:13:37.536106 3384 log.go:172] (0xc00022b290) (0xc0006b3ea0) Stream added, broadcasting: 1\nI0318 22:13:37.539114 3384 log.go:172] (0xc00022b290) Reply frame received for 1\nI0318 22:13:37.539162 3384 log.go:172] (0xc00022b290) (0xc000736000) Create stream\nI0318 22:13:37.539276 3384 log.go:172] (0xc00022b290) (0xc000736000) Stream added, broadcasting: 3\nI0318 22:13:37.540158 3384 log.go:172] (0xc00022b290) Reply frame received for 3\nI0318 22:13:37.540195 3384 log.go:172] (0xc00022b290) (0xc0007360a0) Create stream\nI0318 22:13:37.540204 3384 log.go:172] (0xc00022b290) (0xc0007360a0) Stream added, broadcasting: 5\nI0318 22:13:37.540982 3384 log.go:172] (0xc00022b290) Reply frame received for 5\nI0318 22:13:37.621682 3384 log.go:172] (0xc00022b290) Data frame received for 5\nI0318 22:13:37.621708 3384 log.go:172] (0xc0007360a0) (5) Data frame handling\nI0318 22:13:37.621723 3384 log.go:172] (0xc0007360a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0318 22:13:37.622041 3384 log.go:172] (0xc00022b290) Data frame received for 5\nI0318 22:13:37.622069 3384 log.go:172] (0xc0007360a0) (5) Data frame handling\nI0318 22:13:37.622095 3384 log.go:172] (0xc0007360a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0318 
22:13:37.622436 3384 log.go:172] (0xc00022b290) Data frame received for 3\nI0318 22:13:37.622456 3384 log.go:172] (0xc000736000) (3) Data frame handling\nI0318 22:13:37.622579 3384 log.go:172] (0xc00022b290) Data frame received for 5\nI0318 22:13:37.622599 3384 log.go:172] (0xc0007360a0) (5) Data frame handling\nI0318 22:13:37.624644 3384 log.go:172] (0xc00022b290) Data frame received for 1\nI0318 22:13:37.624659 3384 log.go:172] (0xc0006b3ea0) (1) Data frame handling\nI0318 22:13:37.624669 3384 log.go:172] (0xc0006b3ea0) (1) Data frame sent\nI0318 22:13:37.624676 3384 log.go:172] (0xc00022b290) (0xc0006b3ea0) Stream removed, broadcasting: 1\nI0318 22:13:37.624908 3384 log.go:172] (0xc00022b290) Go away received\nI0318 22:13:37.624969 3384 log.go:172] (0xc00022b290) (0xc0006b3ea0) Stream removed, broadcasting: 1\nI0318 22:13:37.625000 3384 log.go:172] (0xc00022b290) (0xc000736000) Stream removed, broadcasting: 3\nI0318 22:13:37.625009 3384 log.go:172] (0xc00022b290) (0xc0007360a0) Stream removed, broadcasting: 5\n" Mar 18 22:13:37.628: INFO: stdout: "" Mar 18 22:13:37.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8063 execpodxskrz -- /bin/sh -x -c nc -zv -t -w 2 10.101.222.46 80' Mar 18 22:13:37.851: INFO: stderr: "I0318 22:13:37.777946 3415 log.go:172] (0xc0000f5130) (0xc000a40000) Create stream\nI0318 22:13:37.778002 3415 log.go:172] (0xc0000f5130) (0xc000a40000) Stream added, broadcasting: 1\nI0318 22:13:37.780475 3415 log.go:172] (0xc0000f5130) Reply frame received for 1\nI0318 22:13:37.780522 3415 log.go:172] (0xc0000f5130) (0xc00065ba40) Create stream\nI0318 22:13:37.780537 3415 log.go:172] (0xc0000f5130) (0xc00065ba40) Stream added, broadcasting: 3\nI0318 22:13:37.781854 3415 log.go:172] (0xc0000f5130) Reply frame received for 3\nI0318 22:13:37.781910 3415 log.go:172] (0xc0000f5130) (0xc00020c000) Create stream\nI0318 22:13:37.781935 3415 log.go:172] (0xc0000f5130) (0xc00020c000) Stream added, broadcasting: 5\nI0318 22:13:37.782874 3415 log.go:172] (0xc0000f5130) Reply frame received for 5\nI0318 22:13:37.846571 3415 log.go:172] (0xc0000f5130) Data frame received for 5\nI0318 22:13:37.846627 3415 log.go:172] (0xc00020c000) (5) Data frame handling\nI0318 22:13:37.846642 3415 log.go:172] (0xc00020c000) (5) Data frame sent\n+ nc -zv -t -w 2 10.101.222.46 80\nConnection to 10.101.222.46 80 port [tcp/http] succeeded!\nI0318 22:13:37.846684 3415 log.go:172] (0xc0000f5130) Data frame received for 3\nI0318 22:13:37.846818 3415 log.go:172] (0xc00065ba40) (3) Data frame handling\nI0318 22:13:37.846859 3415 log.go:172] (0xc0000f5130) Data frame received for 5\nI0318 22:13:37.846879 3415 log.go:172] (0xc00020c000) (5) Data frame handling\nI0318 22:13:37.848399 3415 log.go:172] (0xc0000f5130) Data frame received for 1\nI0318 22:13:37.848439 3415 log.go:172] (0xc000a40000) (1) Data frame handling\nI0318 22:13:37.848476 3415 log.go:172] (0xc000a40000) (1) Data frame sent\nI0318 22:13:37.848504 3415 log.go:172] (0xc0000f5130) (0xc000a40000) Stream removed, broadcasting: 1\nI0318 22:13:37.848537 3415 log.go:172] (0xc0000f5130) Go away received\nI0318 22:13:37.848766 3415 log.go:172] (0xc0000f5130) (0xc000a40000) Stream removed, broadcasting: 1\nI0318 22:13:37.848781 3415 log.go:172] (0xc0000f5130) (0xc00065ba40) Stream removed, broadcasting: 3\nI0318 22:13:37.848786 3415 log.go:172] (0xc0000f5130) (0xc00020c000) Stream removed, broadcasting: 5\n" Mar 18 22:13:37.851: INFO: stdout: "" Mar 18 22:13:37.851: INFO: Cleaning up the ExternalName to 
ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:13:37.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8063" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.302 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":260,"skipped":4128,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:13:37.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0318 22:13:49.299669 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 18 22:13:49.299: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:13:49.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4865" for this suite. 
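What the garbage-collector spec above hinges on: half of the pods created by simpletest-rc-to-be-deleted carry a second ownerReference pointing at simpletest-rc-to-stay, so when the first RC is deleted while waiting for its dependents, those doubly-owned pods must survive. A minimal sketch of that ownership pattern, assuming client-go v0.17.x to match the kube-apiserver v1.17.2 reported for this run; the function names are illustrative, not the suite's own helpers:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // addSecondOwner appends a non-controller ownerReference to rcToStay, so the
    // pod still has a valid owner after its controlling RC is garbage-collected.
    func addSecondOwner(pod *v1.Pod, rcToStay *v1.ReplicationController) {
        pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
            APIVersion: "v1",
            Kind:       "ReplicationController",
            Name:       rcToStay.Name,
            UID:        rcToStay.UID,
        })
    }

    // deleteAndWaitForDependents removes an RC with foreground propagation: the RC
    // lingers with a deletionTimestamp until its dependents are gone, which is the
    // "owner that's waiting for dependents to be deleted" in the spec name.
    func deleteAndWaitForDependents(c kubernetes.Interface, ns, name string) error {
        fg := metav1.DeletePropagationForeground
        return c.CoreV1().ReplicationControllers(ns).Delete(name,
            &metav1.DeleteOptions{PropagationPolicy: &fg})
    }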
• [SLOW TEST:11.358 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":261,"skipped":4140,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:13:49.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 22:13:49.567: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:13:49.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6470" for this suite. 
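The CustomResourceDefinition spec above exercises the /status subresource of a custom resource: GET/PUT/PATCH against .../<name>/status touch only the status stanza, and the endpoint is served only when the CRD declares it. A minimal sketch of such a declaration using the v1.17-era apiextensions.k8s.io/v1beta1 Go types; the group and kind are illustrative:

    package main

    import (
        apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // newCRDWithStatus declares a CRD whose custom resources serve a /status
    // subresource, so status can be got/updated/patched independently of spec.
    func newCRDWithStatus() *apiextv1beta1.CustomResourceDefinition {
        return &apiextv1beta1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
            Spec: apiextv1beta1.CustomResourceDefinitionSpec{
                Group:   "example.com",
                Version: "v1",
                Scope:   apiextv1beta1.NamespaceScoped,
                Names: apiextv1beta1.CustomResourceDefinitionNames{
                    Plural:   "foos",
                    Singular: "foo",
                    Kind:     "Foo",
                },
                // Without this block, writes to .../foos/<name>/status would 404.
                Subresources: &apiextv1beta1.CustomResourceSubresources{
                    Status: &apiextv1beta1.CustomResourceSubresourceStatus{},
                },
            },
        }
    }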
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":262,"skipped":4145,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:13:49.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-07635e69-0c74-4306-adf4-871d73a727b9 in namespace container-probe-5942 Mar 18 22:13:53.851: INFO: Started pod busybox-07635e69-0c74-4306-adf4-871d73a727b9 in namespace container-probe-5942 STEP: checking the pod's current state and verifying that restartCount is present Mar 18 22:13:53.853: INFO: Initial restart count of pod busybox-07635e69-0c74-4306-adf4-871d73a727b9 is 0 Mar 18 22:14:44.941: INFO: Restart count of pod container-probe-5942/busybox-07635e69-0c74-4306-adf4-871d73a727b9 is now 1 (51.087251142s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:14:44.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5942" for this suite. 
• [SLOW TEST:55.306 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4169,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:14:45.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 18 22:14:50.050: INFO: Successfully updated pod "annotationupdatedce68f97-5d6b-486e-b81c-9f85e2419739" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:14:52.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2624" for this suite. 
• [SLOW TEST:7.052 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4192,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:14:52.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 22:14:52.151: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98aade01-4d49-476c-b293-280ca5a80548" in namespace "downward-api-3341" to be "success or failure" Mar 18 22:14:52.158: INFO: Pod "downwardapi-volume-98aade01-4d49-476c-b293-280ca5a80548": Phase="Pending", Reason="", readiness=false. Elapsed: 6.809002ms Mar 18 22:14:54.191: INFO: Pod "downwardapi-volume-98aade01-4d49-476c-b293-280ca5a80548": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040125124s Mar 18 22:14:56.195: INFO: Pod "downwardapi-volume-98aade01-4d49-476c-b293-280ca5a80548": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044297301s STEP: Saw pod success Mar 18 22:14:56.195: INFO: Pod "downwardapi-volume-98aade01-4d49-476c-b293-280ca5a80548" satisfied condition "success or failure" Mar 18 22:14:56.198: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-98aade01-4d49-476c-b293-280ca5a80548 container client-container: STEP: delete the pod Mar 18 22:14:56.249: INFO: Waiting for pod downwardapi-volume-98aade01-4d49-476c-b293-280ca5a80548 to disappear Mar 18 22:14:56.264: INFO: Pod downwardapi-volume-98aade01-4d49-476c-b293-280ca5a80548 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:14:56.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3341" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4200,"failed":0} ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:14:56.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 18 22:15:00.389: INFO: &Pod{ObjectMeta:{send-events-3d49c020-fcf8-4b06-b761-0c2fb3b3af7a events-6190 /api/v1/namespaces/events-6190/pods/send-events-3d49c020-fcf8-4b06-b761-0c2fb3b3af7a d51e9d9f-59a9-4ee9-90a5-ebdbb02c41e8 867863 0 2020-03-18 22:14:56 +0000 UTC map[name:foo time:357715478] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k62jk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k62jk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k62jk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*
300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 22:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 22:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-18 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.146,StartTime:2020-03-18 22:14:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-18 22:14:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://33730f46930308c617a1d1e6ac292d24d4bda266547e88d60efdf3b21e738183,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.146,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 18 22:15:02.393: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 18 22:15:04.398: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:15:04.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6190" for this suite. 
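The two "Saw ... event for our pod" checks above query the Events API with field selectors rather than scanning every event: one query pins source to the kubelet, the other to the scheduler. A minimal sketch of that lookup, assuming client-go v0.17.x (List took a bare metav1.ListOptions before contexts were threaded through in v0.18); variable names are illustrative:

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/client-go/kubernetes"
    )

    // podEventsFrom counts the events a given component recorded about one pod,
    // e.g. source "kubelet" or "default-scheduler".
    func podEventsFrom(c kubernetes.Interface, ns, podName, source string) (int, error) {
        selector := fields.Set{
            "involvedObject.kind":      "Pod",
            "involvedObject.name":      podName,
            "involvedObject.namespace": ns,
            "source":                   source,
        }.AsSelector().String()
        events, err := c.CoreV1().Events(ns).List(metav1.ListOptions{FieldSelector: selector})
        if err != nil {
            return 0, err
        }
        return len(events.Items), nil
    }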
• [SLOW TEST:8.153 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":266,"skipped":4200,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:15:04.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 18 22:15:04.537: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4117 /api/v1/namespaces/watch-4117/configmaps/e2e-watch-test-resource-version 783395ae-1f75-4a17-9efe-52d5743ddd0f 867899 0 2020-03-18 22:15:04 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 18 22:15:04.537: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4117 /api/v1/namespaces/watch-4117/configmaps/e2e-watch-test-resource-version 783395ae-1f75-4a17-9efe-52d5743ddd0f 867901 0 2020-03-18 22:15:04 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:15:04.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4117" for this suite. 
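The watch spec above demonstrates the replay semantics of resourceVersion: a watch opened at the version returned by the first update still yields the later MODIFIED (mutation: 2) and the DELETED event, even though all three writes happened before the watch existed, because the apiserver replays changes after that version from its history window. A minimal sketch, again assuming client-go v0.17.x; the label selector is the one visible in the log, the rest is illustrative:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // replayFrom opens a ConfigMap watch at an older resourceVersion; events
    // recorded after that version are replayed before live events arrive.
    func replayFrom(c kubernetes.Interface, ns, rv string) error {
        w, err := c.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
            LabelSelector:   "watch-this-configmap=from-resource-version",
            ResourceVersion: rv, // e.g. the version returned by the first update
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Println(ev.Type) // MODIFIED, then DELETED, as observed above
        }
        return nil
    }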
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":267,"skipped":4226,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:15:04.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-385 STEP: creating replication controller nodeport-test in namespace services-385 I0318 22:15:04.674083 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-385, replica count: 2 I0318 22:15:07.724554 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 22:15:10.724774 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 18 22:15:10.724: INFO: Creating new exec pod Mar 18 22:15:15.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-385 execpod6c86w -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 18 22:15:15.954: INFO: stderr: "I0318 22:15:15.869853 3436 log.go:172] (0xc000952dc0) (0xc000a543c0) Create stream\nI0318 22:15:15.869902 3436 log.go:172] (0xc000952dc0) (0xc000a543c0) Stream added, broadcasting: 1\nI0318 22:15:15.871967 3436 log.go:172] (0xc000952dc0) Reply frame received for 1\nI0318 22:15:15.872010 3436 log.go:172] (0xc000952dc0) (0xc000a54460) Create stream\nI0318 22:15:15.872023 3436 log.go:172] (0xc000952dc0) (0xc000a54460) Stream added, broadcasting: 3\nI0318 22:15:15.872992 3436 log.go:172] (0xc000952dc0) Reply frame received for 3\nI0318 22:15:15.873035 3436 log.go:172] (0xc000952dc0) (0xc000a54500) Create stream\nI0318 22:15:15.873048 3436 log.go:172] (0xc000952dc0) (0xc000a54500) Stream added, broadcasting: 5\nI0318 22:15:15.873994 3436 log.go:172] (0xc000952dc0) Reply frame received for 5\nI0318 22:15:15.949429 3436 log.go:172] (0xc000952dc0) Data frame received for 5\nI0318 22:15:15.949459 3436 log.go:172] (0xc000a54500) (5) Data frame handling\nI0318 22:15:15.949477 3436 log.go:172] (0xc000a54500) (5) Data frame sent\nI0318 22:15:15.949485 3436 log.go:172] (0xc000952dc0) Data frame received for 5\nI0318 22:15:15.949491 3436 log.go:172] (0xc000a54500) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0318 22:15:15.949508 3436 log.go:172] (0xc000a54500) (5) Data frame sent\nI0318 22:15:15.949521 3436 log.go:172] (0xc000952dc0) Data frame received for 5\nI0318 22:15:15.949528 3436 log.go:172] (0xc000a54500) (5) Data frame handling\nI0318 22:15:15.949595 3436 
log.go:172] (0xc000952dc0) Data frame received for 3\nI0318 22:15:15.949609 3436 log.go:172] (0xc000a54460) (3) Data frame handling\nI0318 22:15:15.950922 3436 log.go:172] (0xc000952dc0) Data frame received for 1\nI0318 22:15:15.950934 3436 log.go:172] (0xc000a543c0) (1) Data frame handling\nI0318 22:15:15.950940 3436 log.go:172] (0xc000a543c0) (1) Data frame sent\nI0318 22:15:15.950948 3436 log.go:172] (0xc000952dc0) (0xc000a543c0) Stream removed, broadcasting: 1\nI0318 22:15:15.950955 3436 log.go:172] (0xc000952dc0) Go away received\nI0318 22:15:15.951290 3436 log.go:172] (0xc000952dc0) (0xc000a543c0) Stream removed, broadcasting: 1\nI0318 22:15:15.951307 3436 log.go:172] (0xc000952dc0) (0xc000a54460) Stream removed, broadcasting: 3\nI0318 22:15:15.951315 3436 log.go:172] (0xc000952dc0) (0xc000a54500) Stream removed, broadcasting: 5\n" Mar 18 22:15:15.954: INFO: stdout: "" Mar 18 22:15:15.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-385 execpod6c86w -- /bin/sh -x -c nc -zv -t -w 2 10.109.86.239 80' Mar 18 22:15:16.151: INFO: stderr: "I0318 22:15:16.089455 3456 log.go:172] (0xc000966580) (0xc0008299a0) Create stream\nI0318 22:15:16.089525 3456 log.go:172] (0xc000966580) (0xc0008299a0) Stream added, broadcasting: 1\nI0318 22:15:16.091332 3456 log.go:172] (0xc000966580) Reply frame received for 1\nI0318 22:15:16.091367 3456 log.go:172] (0xc000966580) (0xc000ac0000) Create stream\nI0318 22:15:16.091378 3456 log.go:172] (0xc000966580) (0xc000ac0000) Stream added, broadcasting: 3\nI0318 22:15:16.092601 3456 log.go:172] (0xc000966580) Reply frame received for 3\nI0318 22:15:16.092632 3456 log.go:172] (0xc000966580) (0xc000ac00a0) Create stream\nI0318 22:15:16.092653 3456 log.go:172] (0xc000966580) (0xc000ac00a0) Stream added, broadcasting: 5\nI0318 22:15:16.093838 3456 log.go:172] (0xc000966580) Reply frame received for 5\nI0318 22:15:16.144321 3456 log.go:172] (0xc000966580) Data frame received for 3\nI0318 22:15:16.144402 3456 log.go:172] (0xc000966580) Data frame received for 5\nI0318 22:15:16.144447 3456 log.go:172] (0xc000ac00a0) (5) Data frame handling\nI0318 22:15:16.144478 3456 log.go:172] (0xc000ac00a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.109.86.239 80\nConnection to 10.109.86.239 80 port [tcp/http] succeeded!\nI0318 22:15:16.144509 3456 log.go:172] (0xc000ac0000) (3) Data frame handling\nI0318 22:15:16.144650 3456 log.go:172] (0xc000966580) Data frame received for 5\nI0318 22:15:16.144675 3456 log.go:172] (0xc000ac00a0) (5) Data frame handling\nI0318 22:15:16.146615 3456 log.go:172] (0xc000966580) Data frame received for 1\nI0318 22:15:16.146644 3456 log.go:172] (0xc0008299a0) (1) Data frame handling\nI0318 22:15:16.146672 3456 log.go:172] (0xc0008299a0) (1) Data frame sent\nI0318 22:15:16.146697 3456 log.go:172] (0xc000966580) (0xc0008299a0) Stream removed, broadcasting: 1\nI0318 22:15:16.146846 3456 log.go:172] (0xc000966580) Go away received\nI0318 22:15:16.147033 3456 log.go:172] (0xc000966580) (0xc0008299a0) Stream removed, broadcasting: 1\nI0318 22:15:16.147062 3456 log.go:172] (0xc000966580) (0xc000ac0000) Stream removed, broadcasting: 3\nI0318 22:15:16.147077 3456 log.go:172] (0xc000966580) (0xc000ac00a0) Stream removed, broadcasting: 5\n" Mar 18 22:15:16.151: INFO: stdout: "" Mar 18 22:15:16.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-385 execpod6c86w -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32012' Mar 18 22:15:16.344: INFO: stderr: "I0318 22:15:16.279677 
3478 log.go:172] (0xc000b509a0) (0xc000a00000) Create stream\nI0318 22:15:16.279730 3478 log.go:172] (0xc000b509a0) (0xc000a00000) Stream added, broadcasting: 1\nI0318 22:15:16.282493 3478 log.go:172] (0xc000b509a0) Reply frame received for 1\nI0318 22:15:16.282541 3478 log.go:172] (0xc000b509a0) (0xc000683a40) Create stream\nI0318 22:15:16.282555 3478 log.go:172] (0xc000b509a0) (0xc000683a40) Stream added, broadcasting: 3\nI0318 22:15:16.283398 3478 log.go:172] (0xc000b509a0) Reply frame received for 3\nI0318 22:15:16.283454 3478 log.go:172] (0xc000b509a0) (0xc0005cc000) Create stream\nI0318 22:15:16.283471 3478 log.go:172] (0xc000b509a0) (0xc0005cc000) Stream added, broadcasting: 5\nI0318 22:15:16.284338 3478 log.go:172] (0xc000b509a0) Reply frame received for 5\nI0318 22:15:16.336391 3478 log.go:172] (0xc000b509a0) Data frame received for 3\nI0318 22:15:16.336434 3478 log.go:172] (0xc000683a40) (3) Data frame handling\nI0318 22:15:16.336477 3478 log.go:172] (0xc000b509a0) Data frame received for 5\nI0318 22:15:16.336493 3478 log.go:172] (0xc0005cc000) (5) Data frame handling\nI0318 22:15:16.336510 3478 log.go:172] (0xc0005cc000) (5) Data frame sent\nI0318 22:15:16.336545 3478 log.go:172] (0xc000b509a0) Data frame received for 5\nI0318 22:15:16.336560 3478 log.go:172] (0xc0005cc000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 32012\nConnection to 172.17.0.10 32012 port [tcp/32012] succeeded!\nI0318 22:15:16.338467 3478 log.go:172] (0xc000b509a0) Data frame received for 1\nI0318 22:15:16.338496 3478 log.go:172] (0xc000a00000) (1) Data frame handling\nI0318 22:15:16.338516 3478 log.go:172] (0xc000a00000) (1) Data frame sent\nI0318 22:15:16.338537 3478 log.go:172] (0xc000b509a0) (0xc000a00000) Stream removed, broadcasting: 1\nI0318 22:15:16.339003 3478 log.go:172] (0xc000b509a0) (0xc000a00000) Stream removed, broadcasting: 1\nI0318 22:15:16.339032 3478 log.go:172] (0xc000b509a0) (0xc000683a40) Stream removed, broadcasting: 3\nI0318 22:15:16.339235 3478 log.go:172] (0xc000b509a0) (0xc0005cc000) Stream removed, broadcasting: 5\n" Mar 18 22:15:16.344: INFO: stdout: "" Mar 18 22:15:16.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-385 execpod6c86w -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32012' Mar 18 22:15:16.530: INFO: stderr: "I0318 22:15:16.465575 3500 log.go:172] (0xc00002cd10) (0xc0006bba40) Create stream\nI0318 22:15:16.465634 3500 log.go:172] (0xc00002cd10) (0xc0006bba40) Stream added, broadcasting: 1\nI0318 22:15:16.468165 3500 log.go:172] (0xc00002cd10) Reply frame received for 1\nI0318 22:15:16.468222 3500 log.go:172] (0xc00002cd10) (0xc0007620a0) Create stream\nI0318 22:15:16.468237 3500 log.go:172] (0xc00002cd10) (0xc0007620a0) Stream added, broadcasting: 3\nI0318 22:15:16.469335 3500 log.go:172] (0xc00002cd10) Reply frame received for 3\nI0318 22:15:16.469370 3500 log.go:172] (0xc00002cd10) (0xc0006bbc20) Create stream\nI0318 22:15:16.469379 3500 log.go:172] (0xc00002cd10) (0xc0006bbc20) Stream added, broadcasting: 5\nI0318 22:15:16.470295 3500 log.go:172] (0xc00002cd10) Reply frame received for 5\nI0318 22:15:16.523742 3500 log.go:172] (0xc00002cd10) Data frame received for 3\nI0318 22:15:16.523787 3500 log.go:172] (0xc0007620a0) (3) Data frame handling\nI0318 22:15:16.523850 3500 log.go:172] (0xc00002cd10) Data frame received for 5\nI0318 22:15:16.523912 3500 log.go:172] (0xc0006bbc20) (5) Data frame handling\nI0318 22:15:16.523934 3500 log.go:172] (0xc0006bbc20) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 
32012\nConnection to 172.17.0.8 32012 port [tcp/32012] succeeded!\nI0318 22:15:16.524156 3500 log.go:172] (0xc00002cd10) Data frame received for 5\nI0318 22:15:16.524190 3500 log.go:172] (0xc0006bbc20) (5) Data frame handling\nI0318 22:15:16.526110 3500 log.go:172] (0xc00002cd10) Data frame received for 1\nI0318 22:15:16.526143 3500 log.go:172] (0xc0006bba40) (1) Data frame handling\nI0318 22:15:16.526175 3500 log.go:172] (0xc0006bba40) (1) Data frame sent\nI0318 22:15:16.526208 3500 log.go:172] (0xc00002cd10) (0xc0006bba40) Stream removed, broadcasting: 1\nI0318 22:15:16.526267 3500 log.go:172] (0xc00002cd10) Go away received\nI0318 22:15:16.526644 3500 log.go:172] (0xc00002cd10) (0xc0006bba40) Stream removed, broadcasting: 1\nI0318 22:15:16.526667 3500 log.go:172] (0xc00002cd10) (0xc0007620a0) Stream removed, broadcasting: 3\nI0318 22:15:16.526679 3500 log.go:172] (0xc00002cd10) (0xc0006bbc20) Stream removed, broadcasting: 5\n" Mar 18 22:15:16.530: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:15:16.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-385" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.994 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":268,"skipped":4247,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:15:16.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-286 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-286 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-286 Mar 18 22:15:16.609: INFO: Found 0 stateful pods, waiting for 1 Mar 18 22:15:26.613: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming 
that stateful set scale up will halt with unhealthy stateful pod Mar 18 22:15:26.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-286 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 18 22:15:26.879: INFO: stderr: "I0318 22:15:26.755992 3522 log.go:172] (0xc00011b3f0) (0xc000665ae0) Create stream\nI0318 22:15:26.756053 3522 log.go:172] (0xc00011b3f0) (0xc000665ae0) Stream added, broadcasting: 1\nI0318 22:15:26.760129 3522 log.go:172] (0xc00011b3f0) Reply frame received for 1\nI0318 22:15:26.760170 3522 log.go:172] (0xc00011b3f0) (0xc000665cc0) Create stream\nI0318 22:15:26.760180 3522 log.go:172] (0xc00011b3f0) (0xc000665cc0) Stream added, broadcasting: 3\nI0318 22:15:26.761833 3522 log.go:172] (0xc00011b3f0) Reply frame received for 3\nI0318 22:15:26.761874 3522 log.go:172] (0xc00011b3f0) (0xc000906000) Create stream\nI0318 22:15:26.761889 3522 log.go:172] (0xc00011b3f0) (0xc000906000) Stream added, broadcasting: 5\nI0318 22:15:26.763574 3522 log.go:172] (0xc00011b3f0) Reply frame received for 5\nI0318 22:15:26.834105 3522 log.go:172] (0xc00011b3f0) Data frame received for 5\nI0318 22:15:26.834145 3522 log.go:172] (0xc000906000) (5) Data frame handling\nI0318 22:15:26.834172 3522 log.go:172] (0xc000906000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0318 22:15:26.873481 3522 log.go:172] (0xc00011b3f0) Data frame received for 3\nI0318 22:15:26.873510 3522 log.go:172] (0xc000665cc0) (3) Data frame handling\nI0318 22:15:26.873538 3522 log.go:172] (0xc000665cc0) (3) Data frame sent\nI0318 22:15:26.873879 3522 log.go:172] (0xc00011b3f0) Data frame received for 3\nI0318 22:15:26.873918 3522 log.go:172] (0xc000665cc0) (3) Data frame handling\nI0318 22:15:26.874089 3522 log.go:172] (0xc00011b3f0) Data frame received for 5\nI0318 22:15:26.874123 3522 log.go:172] (0xc000906000) (5) Data frame handling\nI0318 22:15:26.875835 3522 log.go:172] (0xc00011b3f0) Data frame received for 1\nI0318 22:15:26.875851 3522 log.go:172] (0xc000665ae0) (1) Data frame handling\nI0318 22:15:26.875858 3522 log.go:172] (0xc000665ae0) (1) Data frame sent\nI0318 22:15:26.875957 3522 log.go:172] (0xc00011b3f0) (0xc000665ae0) Stream removed, broadcasting: 1\nI0318 22:15:26.876160 3522 log.go:172] (0xc00011b3f0) Go away received\nI0318 22:15:26.876236 3522 log.go:172] (0xc00011b3f0) (0xc000665ae0) Stream removed, broadcasting: 1\nI0318 22:15:26.876250 3522 log.go:172] (0xc00011b3f0) (0xc000665cc0) Stream removed, broadcasting: 3\nI0318 22:15:26.876257 3522 log.go:172] (0xc00011b3f0) (0xc000906000) Stream removed, broadcasting: 5\n" Mar 18 22:15:26.879: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 18 22:15:26.879: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 18 22:15:26.910: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 18 22:15:36.915: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 18 22:15:36.915: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 22:15:36.931: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998944s Mar 18 22:15:37.935: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994585999s Mar 18 22:15:38.940: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.989743809s Mar 18 22:15:39.944: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 6.985084522s Mar 18 22:15:40.949: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.980956547s Mar 18 22:15:41.953: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.97623576s Mar 18 22:15:42.957: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.97255623s Mar 18 22:15:43.961: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.967751036s Mar 18 22:15:44.966: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.963753755s Mar 18 22:15:45.970: INFO: Verifying statefulset ss doesn't scale past 1 for another 959.476131ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-286 Mar 18 22:15:46.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-286 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 18 22:15:47.186: INFO: stderr: "I0318 22:15:47.090049 3545 log.go:172] (0xc000a35760) (0xc000a74780) Create stream\nI0318 22:15:47.090090 3545 log.go:172] (0xc000a35760) (0xc000a74780) Stream added, broadcasting: 1\nI0318 22:15:47.095512 3545 log.go:172] (0xc000a35760) Reply frame received for 1\nI0318 22:15:47.095554 3545 log.go:172] (0xc000a35760) (0xc0005fc640) Create stream\nI0318 22:15:47.095566 3545 log.go:172] (0xc000a35760) (0xc0005fc640) Stream added, broadcasting: 3\nI0318 22:15:47.096728 3545 log.go:172] (0xc000a35760) Reply frame received for 3\nI0318 22:15:47.096767 3545 log.go:172] (0xc000a35760) (0xc00073f400) Create stream\nI0318 22:15:47.096778 3545 log.go:172] (0xc000a35760) (0xc00073f400) Stream added, broadcasting: 5\nI0318 22:15:47.097953 3545 log.go:172] (0xc000a35760) Reply frame received for 5\nI0318 22:15:47.179093 3545 log.go:172] (0xc000a35760) Data frame received for 5\nI0318 22:15:47.179157 3545 log.go:172] (0xc00073f400) (5) Data frame handling\nI0318 22:15:47.179182 3545 log.go:172] (0xc00073f400) (5) Data frame sent\nI0318 22:15:47.179198 3545 log.go:172] (0xc000a35760) Data frame received for 5\nI0318 22:15:47.179226 3545 log.go:172] (0xc00073f400) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0318 22:15:47.179272 3545 log.go:172] (0xc000a35760) Data frame received for 3\nI0318 22:15:47.179323 3545 log.go:172] (0xc0005fc640) (3) Data frame handling\nI0318 22:15:47.179343 3545 log.go:172] (0xc0005fc640) (3) Data frame sent\nI0318 22:15:47.179367 3545 log.go:172] (0xc000a35760) Data frame received for 3\nI0318 22:15:47.179384 3545 log.go:172] (0xc0005fc640) (3) Data frame handling\nI0318 22:15:47.181593 3545 log.go:172] (0xc000a35760) Data frame received for 1\nI0318 22:15:47.181617 3545 log.go:172] (0xc000a74780) (1) Data frame handling\nI0318 22:15:47.181635 3545 log.go:172] (0xc000a74780) (1) Data frame sent\nI0318 22:15:47.181648 3545 log.go:172] (0xc000a35760) (0xc000a74780) Stream removed, broadcasting: 1\nI0318 22:15:47.181668 3545 log.go:172] (0xc000a35760) Go away received\nI0318 22:15:47.182201 3545 log.go:172] (0xc000a35760) (0xc000a74780) Stream removed, broadcasting: 1\nI0318 22:15:47.182224 3545 log.go:172] (0xc000a35760) (0xc0005fc640) Stream removed, broadcasting: 3\nI0318 22:15:47.182236 3545 log.go:172] (0xc000a35760) (0xc00073f400) Stream removed, broadcasting: 5\n" Mar 18 22:15:47.186: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 18 22:15:47.186: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 18 22:15:47.190: INFO: Found 1 stateful pods, waiting for 3 Mar 18 22:15:57.194: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 18 22:15:57.194: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 18 22:15:57.195: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 18 22:15:57.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-286 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 18 22:15:57.397: INFO: stderr: "I0318 22:15:57.332024 3568 log.go:172] (0xc00050eb00) (0xc0006f1cc0) Create stream\nI0318 22:15:57.332109 3568 log.go:172] (0xc00050eb00) (0xc0006f1cc0) Stream added, broadcasting: 1\nI0318 22:15:57.334734 3568 log.go:172] (0xc00050eb00) Reply frame received for 1\nI0318 22:15:57.334777 3568 log.go:172] (0xc00050eb00) (0xc000753400) Create stream\nI0318 22:15:57.334799 3568 log.go:172] (0xc00050eb00) (0xc000753400) Stream added, broadcasting: 3\nI0318 22:15:57.335517 3568 log.go:172] (0xc00050eb00) Reply frame received for 3\nI0318 22:15:57.335544 3568 log.go:172] (0xc00050eb00) (0xc0006f1d60) Create stream\nI0318 22:15:57.335557 3568 log.go:172] (0xc00050eb00) (0xc0006f1d60) Stream added, broadcasting: 5\nI0318 22:15:57.336405 3568 log.go:172] (0xc00050eb00) Reply frame received for 5\nI0318 22:15:57.390954 3568 log.go:172] (0xc00050eb00) Data frame received for 3\nI0318 22:15:57.391022 3568 log.go:172] (0xc000753400) (3) Data frame handling\nI0318 22:15:57.391049 3568 log.go:172] (0xc000753400) (3) Data frame sent\nI0318 22:15:57.391109 3568 log.go:172] (0xc00050eb00) Data frame received for 5\nI0318 22:15:57.391138 3568 log.go:172] (0xc0006f1d60) (5) Data frame handling\nI0318 22:15:57.391150 3568 log.go:172] (0xc0006f1d60) (5) Data frame sent\nI0318 22:15:57.391158 3568 log.go:172] (0xc00050eb00) Data frame received for 5\nI0318 22:15:57.391167 3568 log.go:172] (0xc0006f1d60) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0318 22:15:57.391181 3568 log.go:172] (0xc00050eb00) Data frame received for 3\nI0318 22:15:57.391189 3568 log.go:172] (0xc000753400) (3) Data frame handling\nI0318 22:15:57.392827 3568 log.go:172] (0xc00050eb00) Data frame received for 1\nI0318 22:15:57.392847 3568 log.go:172] (0xc0006f1cc0) (1) Data frame handling\nI0318 22:15:57.392869 3568 log.go:172] (0xc0006f1cc0) (1) Data frame sent\nI0318 22:15:57.392882 3568 log.go:172] (0xc00050eb00) (0xc0006f1cc0) Stream removed, broadcasting: 1\nI0318 22:15:57.392897 3568 log.go:172] (0xc00050eb00) Go away received\nI0318 22:15:57.393465 3568 log.go:172] (0xc00050eb00) (0xc0006f1cc0) Stream removed, broadcasting: 1\nI0318 22:15:57.393494 3568 log.go:172] (0xc00050eb00) (0xc000753400) Stream removed, broadcasting: 3\nI0318 22:15:57.393508 3568 log.go:172] (0xc00050eb00) (0xc0006f1d60) Stream removed, broadcasting: 5\n" Mar 18 22:15:57.397: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 18 22:15:57.397: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 18 22:15:57.397: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-286 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 18 22:15:57.637: INFO: stderr: "I0318 22:15:57.526444 3590 log.go:172] (0xc000af82c0) (0xc00045e0a0) Create stream\nI0318 22:15:57.526495 3590 log.go:172] (0xc000af82c0) (0xc00045e0a0) Stream added, broadcasting: 1\nI0318 22:15:57.529079 3590 log.go:172] (0xc000af82c0) Reply frame received for 1\nI0318 22:15:57.529223 3590 log.go:172] (0xc000af82c0) (0xc00045e140) Create stream\nI0318 22:15:57.529239 3590 log.go:172] (0xc000af82c0) (0xc00045e140) Stream added, broadcasting: 3\nI0318 22:15:57.530337 3590 log.go:172] (0xc000af82c0) Reply frame received for 3\nI0318 22:15:57.530377 3590 log.go:172] (0xc000af82c0) (0xc000788000) Create stream\nI0318 22:15:57.530394 3590 log.go:172] (0xc000af82c0) (0xc000788000) Stream added, broadcasting: 5\nI0318 22:15:57.531467 3590 log.go:172] (0xc000af82c0) Reply frame received for 5\nI0318 22:15:57.595812 3590 log.go:172] (0xc000af82c0) Data frame received for 5\nI0318 22:15:57.595840 3590 log.go:172] (0xc000788000) (5) Data frame handling\nI0318 22:15:57.595859 3590 log.go:172] (0xc000788000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0318 22:15:57.630558 3590 log.go:172] (0xc000af82c0) Data frame received for 5\nI0318 22:15:57.630597 3590 log.go:172] (0xc000788000) (5) Data frame handling\nI0318 22:15:57.630636 3590 log.go:172] (0xc000af82c0) Data frame received for 3\nI0318 22:15:57.630666 3590 log.go:172] (0xc00045e140) (3) Data frame handling\nI0318 22:15:57.630698 3590 log.go:172] (0xc00045e140) (3) Data frame sent\nI0318 22:15:57.630981 3590 log.go:172] (0xc000af82c0) Data frame received for 3\nI0318 22:15:57.631018 3590 log.go:172] (0xc00045e140) (3) Data frame handling\nI0318 22:15:57.632477 3590 log.go:172] (0xc000af82c0) Data frame received for 1\nI0318 22:15:57.632497 3590 log.go:172] (0xc00045e0a0) (1) Data frame handling\nI0318 22:15:57.632509 3590 log.go:172] (0xc00045e0a0) (1) Data frame sent\nI0318 22:15:57.632744 3590 log.go:172] (0xc000af82c0) (0xc00045e0a0) Stream removed, broadcasting: 1\nI0318 22:15:57.633263 3590 log.go:172] (0xc000af82c0) Go away received\nI0318 22:15:57.633307 3590 log.go:172] (0xc000af82c0) (0xc00045e0a0) Stream removed, broadcasting: 1\nI0318 22:15:57.633354 3590 log.go:172] (0xc000af82c0) (0xc00045e140) Stream removed, broadcasting: 3\nI0318 22:15:57.633374 3590 log.go:172] (0xc000af82c0) (0xc000788000) Stream removed, broadcasting: 5\n" Mar 18 22:15:57.637: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 18 22:15:57.637: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 18 22:15:57.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-286 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 18 22:15:57.878: INFO: stderr: "I0318 22:15:57.763462 3609 log.go:172] (0xc000aee4d0) (0xc000abe140) Create stream\nI0318 22:15:57.763526 3609 log.go:172] (0xc000aee4d0) (0xc000abe140) Stream added, broadcasting: 1\nI0318 22:15:57.769733 3609 log.go:172] (0xc000aee4d0) Reply frame received for 1\nI0318 22:15:57.769793 3609 log.go:172] (0xc000aee4d0) (0xc000a2e280) Create stream\nI0318 22:15:57.769808 3609 log.go:172] (0xc000aee4d0) (0xc000a2e280) Stream added, broadcasting: 3\nI0318 22:15:57.772056 3609 log.go:172] (0xc000aee4d0) Reply 
frame received for 3\nI0318 22:15:57.772092 3609 log.go:172] (0xc000aee4d0) (0xc000abe1e0) Create stream\nI0318 22:15:57.772103 3609 log.go:172] (0xc000aee4d0) (0xc000abe1e0) Stream added, broadcasting: 5\nI0318 22:15:57.772958 3609 log.go:172] (0xc000aee4d0) Reply frame received for 5\nI0318 22:15:57.833101 3609 log.go:172] (0xc000aee4d0) Data frame received for 5\nI0318 22:15:57.833279 3609 log.go:172] (0xc000abe1e0) (5) Data frame handling\nI0318 22:15:57.833296 3609 log.go:172] (0xc000abe1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0318 22:15:57.870656 3609 log.go:172] (0xc000aee4d0) Data frame received for 3\nI0318 22:15:57.870694 3609 log.go:172] (0xc000a2e280) (3) Data frame handling\nI0318 22:15:57.870716 3609 log.go:172] (0xc000a2e280) (3) Data frame sent\nI0318 22:15:57.870979 3609 log.go:172] (0xc000aee4d0) Data frame received for 5\nI0318 22:15:57.871068 3609 log.go:172] (0xc000abe1e0) (5) Data frame handling\nI0318 22:15:57.871093 3609 log.go:172] (0xc000aee4d0) Data frame received for 3\nI0318 22:15:57.871101 3609 log.go:172] (0xc000a2e280) (3) Data frame handling\nI0318 22:15:57.872874 3609 log.go:172] (0xc000aee4d0) Data frame received for 1\nI0318 22:15:57.872889 3609 log.go:172] (0xc000abe140) (1) Data frame handling\nI0318 22:15:57.872895 3609 log.go:172] (0xc000abe140) (1) Data frame sent\nI0318 22:15:57.872903 3609 log.go:172] (0xc000aee4d0) (0xc000abe140) Stream removed, broadcasting: 1\nI0318 22:15:57.872911 3609 log.go:172] (0xc000aee4d0) Go away received\nI0318 22:15:57.873968 3609 log.go:172] (0xc000aee4d0) (0xc000abe140) Stream removed, broadcasting: 1\nI0318 22:15:57.874006 3609 log.go:172] (0xc000aee4d0) (0xc000a2e280) Stream removed, broadcasting: 3\nI0318 22:15:57.874026 3609 log.go:172] (0xc000aee4d0) (0xc000abe1e0) Stream removed, broadcasting: 5\n" Mar 18 22:15:57.878: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 18 22:15:57.878: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 18 22:15:57.878: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 22:15:57.882: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Mar 18 22:16:07.890: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 18 22:16:07.890: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 18 22:16:07.890: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 18 22:16:07.911: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999342s Mar 18 22:16:08.915: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.985831246s Mar 18 22:16:09.920: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.981596654s Mar 18 22:16:10.924: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.976858271s Mar 18 22:16:11.929: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.972522578s Mar 18 22:16:12.935: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.967051622s Mar 18 22:16:13.939: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.961865107s Mar 18 22:16:14.944: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.957231966s Mar 18 22:16:15.948: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.952497982s Mar 18 22:16:16.953: INFO: Verifying statefulset 
ss doesn't scale past 3 for another 947.971986ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-286 Mar 18 22:16:17.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-286 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 18 22:16:18.140: INFO: stderr: "I0318 22:16:18.087572 3630 log.go:172] (0xc000a146e0) (0xc0009cc000) Create stream\nI0318 22:16:18.087648 3630 log.go:172] (0xc000a146e0) (0xc0009cc000) Stream added, broadcasting: 1\nI0318 22:16:18.091687 3630 log.go:172] (0xc000a146e0) Reply frame received for 1\nI0318 22:16:18.091718 3630 log.go:172] (0xc000a146e0) (0xc0009cc0a0) Create stream\nI0318 22:16:18.091726 3630 log.go:172] (0xc000a146e0) (0xc0009cc0a0) Stream added, broadcasting: 3\nI0318 22:16:18.092699 3630 log.go:172] (0xc000a146e0) Reply frame received for 3\nI0318 22:16:18.092755 3630 log.go:172] (0xc000a146e0) (0xc00066fa40) Create stream\nI0318 22:16:18.092783 3630 log.go:172] (0xc000a146e0) (0xc00066fa40) Stream added, broadcasting: 5\nI0318 22:16:18.093733 3630 log.go:172] (0xc000a146e0) Reply frame received for 5\nI0318 22:16:18.133314 3630 log.go:172] (0xc000a146e0) Data frame received for 5\nI0318 22:16:18.133354 3630 log.go:172] (0xc00066fa40) (5) Data frame handling\nI0318 22:16:18.133366 3630 log.go:172] (0xc00066fa40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0318 22:16:18.133377 3630 log.go:172] (0xc000a146e0) Data frame received for 3\nI0318 22:16:18.133382 3630 log.go:172] (0xc0009cc0a0) (3) Data frame handling\nI0318 22:16:18.133394 3630 log.go:172] (0xc0009cc0a0) (3) Data frame sent\nI0318 22:16:18.133402 3630 log.go:172] (0xc000a146e0) Data frame received for 3\nI0318 22:16:18.133408 3630 log.go:172] (0xc0009cc0a0) (3) Data frame handling\nI0318 22:16:18.133554 3630 log.go:172] (0xc000a146e0) Data frame received for 5\nI0318 22:16:18.133574 3630 log.go:172] (0xc00066fa40) (5) Data frame handling\nI0318 22:16:18.134740 3630 log.go:172] (0xc000a146e0) Data frame received for 1\nI0318 22:16:18.134761 3630 log.go:172] (0xc0009cc000) (1) Data frame handling\nI0318 22:16:18.134771 3630 log.go:172] (0xc0009cc000) (1) Data frame sent\nI0318 22:16:18.134781 3630 log.go:172] (0xc000a146e0) (0xc0009cc000) Stream removed, broadcasting: 1\nI0318 22:16:18.134806 3630 log.go:172] (0xc000a146e0) Go away received\nI0318 22:16:18.135050 3630 log.go:172] (0xc000a146e0) (0xc0009cc000) Stream removed, broadcasting: 1\nI0318 22:16:18.135069 3630 log.go:172] (0xc000a146e0) (0xc0009cc0a0) Stream removed, broadcasting: 3\nI0318 22:16:18.135080 3630 log.go:172] (0xc000a146e0) (0xc00066fa40) Stream removed, broadcasting: 5\n" Mar 18 22:16:18.140: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 18 22:16:18.140: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 18 22:16:18.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-286 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 18 22:16:18.361: INFO: stderr: "I0318 22:16:18.276165 3652 log.go:172] (0xc0009b0000) (0xc000759540) Create stream\nI0318 22:16:18.276232 3652 log.go:172] (0xc0009b0000) (0xc000759540) Stream added, broadcasting: 1\nI0318 22:16:18.279565 3652 log.go:172] (0xc0009b0000) Reply frame received for 1\nI0318 22:16:18.279616 
3652 log.go:172] (0xc0009b0000) (0xc0009c4000) Create stream\nI0318 22:16:18.279632 3652 log.go:172] (0xc0009b0000) (0xc0009c4000) Stream added, broadcasting: 3\nI0318 22:16:18.280597 3652 log.go:172] (0xc0009b0000) Reply frame received for 3\nI0318 22:16:18.280641 3652 log.go:172] (0xc0009b0000) (0xc000a5c000) Create stream\nI0318 22:16:18.280663 3652 log.go:172] (0xc0009b0000) (0xc000a5c000) Stream added, broadcasting: 5\nI0318 22:16:18.281968 3652 log.go:172] (0xc0009b0000) Reply frame received for 5\nI0318 22:16:18.355329 3652 log.go:172] (0xc0009b0000) Data frame received for 3\nI0318 22:16:18.355372 3652 log.go:172] (0xc0009c4000) (3) Data frame handling\nI0318 22:16:18.355410 3652 log.go:172] (0xc0009c4000) (3) Data frame sent\nI0318 22:16:18.355435 3652 log.go:172] (0xc0009b0000) Data frame received for 3\nI0318 22:16:18.355457 3652 log.go:172] (0xc0009c4000) (3) Data frame handling\nI0318 22:16:18.355723 3652 log.go:172] (0xc0009b0000) Data frame received for 5\nI0318 22:16:18.355752 3652 log.go:172] (0xc000a5c000) (5) Data frame handling\nI0318 22:16:18.355773 3652 log.go:172] (0xc000a5c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0318 22:16:18.356014 3652 log.go:172] (0xc0009b0000) Data frame received for 5\nI0318 22:16:18.356042 3652 log.go:172] (0xc000a5c000) (5) Data frame handling\nI0318 22:16:18.357749 3652 log.go:172] (0xc0009b0000) Data frame received for 1\nI0318 22:16:18.357786 3652 log.go:172] (0xc000759540) (1) Data frame handling\nI0318 22:16:18.357813 3652 log.go:172] (0xc000759540) (1) Data frame sent\nI0318 22:16:18.357836 3652 log.go:172] (0xc0009b0000) (0xc000759540) Stream removed, broadcasting: 1\nI0318 22:16:18.357972 3652 log.go:172] (0xc0009b0000) Go away received\nI0318 22:16:18.358308 3652 log.go:172] (0xc0009b0000) (0xc000759540) Stream removed, broadcasting: 1\nI0318 22:16:18.358330 3652 log.go:172] (0xc0009b0000) (0xc0009c4000) Stream removed, broadcasting: 3\nI0318 22:16:18.358342 3652 log.go:172] (0xc0009b0000) (0xc000a5c000) Stream removed, broadcasting: 5\n" Mar 18 22:16:18.361: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 18 22:16:18.361: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 18 22:16:18.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 18 22:16:18.558: INFO: stderr: "I0318 22:16:18.488637 3673 log.go:172] (0xc0000f5290) (0xc0007635e0) Create stream\nI0318 22:16:18.488710 3673 log.go:172] (0xc0000f5290) (0xc0007635e0) Stream added, broadcasting: 1\nI0318 22:16:18.491240 3673 log.go:172] (0xc0000f5290) Reply frame received for 1\nI0318 22:16:18.491291 3673 log.go:172] (0xc0000f5290) (0xc0009a6000) Create stream\nI0318 22:16:18.491305 3673 log.go:172] (0xc0000f5290) (0xc0009a6000) Stream added, broadcasting: 3\nI0318 22:16:18.492213 3673 log.go:172] (0xc0000f5290) Reply frame received for 3\nI0318 22:16:18.492272 3673 log.go:172] (0xc0000f5290) (0xc0009a60a0) Create stream\nI0318 22:16:18.492290 3673 log.go:172] (0xc0000f5290) (0xc0009a60a0) Stream added, broadcasting: 5\nI0318 22:16:18.493774 3673 log.go:172] (0xc0000f5290) Reply frame received for 5\nI0318 22:16:18.551685 3673 log.go:172] (0xc0000f5290) Data frame received for 5\nI0318 22:16:18.551716 3673 log.go:172] (0xc0009a60a0) (5) Data frame handling\nI0318 22:16:18.551732 
3673 log.go:172] (0xc0009a60a0) (5) Data frame sent\nI0318 22:16:18.551743 3673 log.go:172] (0xc0000f5290) Data frame received for 5\nI0318 22:16:18.551753 3673 log.go:172] (0xc0009a60a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0318 22:16:18.551837 3673 log.go:172] (0xc0000f5290) Data frame received for 3\nI0318 22:16:18.551867 3673 log.go:172] (0xc0009a6000) (3) Data frame handling\nI0318 22:16:18.551895 3673 log.go:172] (0xc0009a6000) (3) Data frame sent\nI0318 22:16:18.551941 3673 log.go:172] (0xc0000f5290) Data frame received for 3\nI0318 22:16:18.551957 3673 log.go:172] (0xc0009a6000) (3) Data frame handling\nI0318 22:16:18.554215 3673 log.go:172] (0xc0000f5290) Data frame received for 1\nI0318 22:16:18.554252 3673 log.go:172] (0xc0007635e0) (1) Data frame handling\nI0318 22:16:18.554287 3673 log.go:172] (0xc0007635e0) (1) Data frame sent\nI0318 22:16:18.554313 3673 log.go:172] (0xc0000f5290) (0xc0007635e0) Stream removed, broadcasting: 1\nI0318 22:16:18.554406 3673 log.go:172] (0xc0000f5290) Go away received\nI0318 22:16:18.554772 3673 log.go:172] (0xc0000f5290) (0xc0007635e0) Stream removed, broadcasting: 1\nI0318 22:16:18.554797 3673 log.go:172] (0xc0000f5290) (0xc0009a6000) Stream removed, broadcasting: 3\nI0318 22:16:18.554810 3673 log.go:172] (0xc0000f5290) (0xc0009a60a0) Stream removed, broadcasting: 5\n" Mar 18 22:16:18.559: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 18 22:16:18.559: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 18 22:16:18.559: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 18 22:16:38.574: INFO: Deleting all statefulset in ns statefulset-286 Mar 18 22:16:38.578: INFO: Scaling statefulset ss to 0 Mar 18 22:16:38.586: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 22:16:38.588: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:16:38.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-286" for this suite. 
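The scale-down and the wait that follow it in this spec map onto plain kubectl operations; a minimal sketch, reusing the ss name and statefulset-286 namespace from this run (illustrative, not the framework's exact API calls):

  kubectl --kubeconfig=/root/.kube/config -n statefulset-286 scale statefulset ss --replicas=0
  kubectl --kubeconfig=/root/.kube/config -n statefulset-286 get statefulset ss -o jsonpath='{.status.replicas}'

The framework polls status.replicas until it reports 0, which is why "Waiting for statefulset status.replicas updated to 0" appears in the log before the StatefulSet itself is deleted.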
• [SLOW TEST:82.089 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":269,"skipped":4258,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:16:38.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 18 22:16:38.682: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e94f67b5-8e71-4ed3-831c-d63fd60a3d49" in namespace "projected-6746" to be "success or failure" Mar 18 22:16:38.686: INFO: Pod "downwardapi-volume-e94f67b5-8e71-4ed3-831c-d63fd60a3d49": Phase="Pending", Reason="", readiness=false. Elapsed: 3.590769ms Mar 18 22:16:40.689: INFO: Pod "downwardapi-volume-e94f67b5-8e71-4ed3-831c-d63fd60a3d49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007348666s Mar 18 22:16:42.694: INFO: Pod "downwardapi-volume-e94f67b5-8e71-4ed3-831c-d63fd60a3d49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011553259s STEP: Saw pod success Mar 18 22:16:42.694: INFO: Pod "downwardapi-volume-e94f67b5-8e71-4ed3-831c-d63fd60a3d49" satisfied condition "success or failure" Mar 18 22:16:42.697: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e94f67b5-8e71-4ed3-831c-d63fd60a3d49 container client-container: STEP: delete the pod Mar 18 22:16:42.742: INFO: Waiting for pod downwardapi-volume-e94f67b5-8e71-4ed3-831c-d63fd60a3d49 to disappear Mar 18 22:16:42.752: INFO: Pod downwardapi-volume-e94f67b5-8e71-4ed3-831c-d63fd60a3d49 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:16:42.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6746" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:16:42.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 22:16:42.838: INFO: Create a RollingUpdate DaemonSet Mar 18 22:16:42.841: INFO: Check that daemon pods launch on every node of the cluster Mar 18 22:16:42.849: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:16:42.854: INFO: Number of nodes with available pods: 0 Mar 18 22:16:42.854: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:16:43.904: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:16:43.907: INFO: Number of nodes with available pods: 0 Mar 18 22:16:43.907: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:16:44.888: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:16:44.891: INFO: Number of nodes with available pods: 0 Mar 18 22:16:44.891: INFO: Node jerma-worker is running more than one daemon pod Mar 18 22:16:45.859: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:16:45.863: INFO: Number of nodes with available pods: 1 Mar 18 22:16:45.863: INFO: Node jerma-worker2 is running more than one daemon pod Mar 18 22:16:46.858: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:16:46.861: INFO: Number of nodes with available pods: 2 Mar 18 22:16:46.861: INFO: Number of running nodes: 2, number of available pods: 2 Mar 18 22:16:46.861: INFO: Update the DaemonSet to trigger a rollout Mar 18 22:16:46.867: INFO: Updating DaemonSet daemon-set Mar 18 22:16:49.884: INFO: Roll back the DaemonSet before rollout is complete Mar 18 22:16:49.890: INFO: Updating DaemonSet daemon-set Mar 18 22:16:49.890: INFO: Make sure DaemonSet rollback is complete Mar 18 22:16:49.898: INFO: Wrong image for pod: daemon-set-qklqv. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Mar 18 22:16:49.898: INFO: Pod daemon-set-qklqv is not available Mar 18 22:16:49.904: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:16:50.909: INFO: Wrong image for pod: daemon-set-qklqv. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 18 22:16:50.909: INFO: Pod daemon-set-qklqv is not available Mar 18 22:16:50.914: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 22:16:51.977: INFO: Pod daemon-set-rpl8z is not available Mar 18 22:16:51.981: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4280, will wait for the garbage collector to delete the pods Mar 18 22:16:52.093: INFO: Deleting DaemonSet.extensions daemon-set took: 19.687397ms Mar 18 22:16:52.193: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.244818ms Mar 18 22:16:54.222: INFO: Number of nodes with available pods: 0 Mar 18 22:16:54.222: INFO: Number of running nodes: 0, number of available pods: 0 Mar 18 22:16:54.226: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4280/daemonsets","resourceVersion":"868611"},"items":null} Mar 18 22:16:54.228: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4280/pods","resourceVersion":"868611"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:16:54.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4280" for this suite. 
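The programmatic rollout and rollback above have direct kubectl equivalents; a minimal sketch, assuming the daemon-set name and daemonsets-4280 namespace from this run (the container name is not shown in the log, so CONTAINER is a placeholder):

  kubectl --kubeconfig=/root/.kube/config -n daemonsets-4280 set image daemonset/daemon-set CONTAINER=foo:non-existent
  kubectl --kubeconfig=/root/.kube/config -n daemonsets-4280 rollout undo daemonset/daemon-set
  kubectl --kubeconfig=/root/.kube/config -n daemonsets-4280 rollout status daemonset/daemon-set

Rolling back before the bad image finishes rolling out means pods that never received foo:non-existent are left untouched, which is the "without unnecessary restarts" property this spec asserts.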
• [SLOW TEST:11.472 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":271,"skipped":4335,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:16:54.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 18 22:16:54.375: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 18 22:16:57.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8886 create -f -' Mar 18 22:17:00.046: INFO: stderr: "" Mar 18 22:17:00.046: INFO: stdout: "e2e-test-crd-publish-openapi-5255-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 18 22:17:00.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8886 delete e2e-test-crd-publish-openapi-5255-crds test-cr' Mar 18 22:17:00.181: INFO: stderr: "" Mar 18 22:17:00.181: INFO: stdout: "e2e-test-crd-publish-openapi-5255-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 18 22:17:00.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8886 apply -f -' Mar 18 22:17:00.434: INFO: stderr: "" Mar 18 22:17:00.434: INFO: stdout: "e2e-test-crd-publish-openapi-5255-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 18 22:17:00.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8886 delete e2e-test-crd-publish-openapi-5255-crds test-cr' Mar 18 22:17:00.547: INFO: stderr: "" Mar 18 22:17:00.547: INFO: stdout: "e2e-test-crd-publish-openapi-5255-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 18 22:17:00.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5255-crds' Mar 18 22:17:00.797: INFO: stderr: "" Mar 18 22:17:00.797: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5255-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:17:03.644: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8886" for this suite. • [SLOW TEST:9.403 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":272,"skipped":4338,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:17:03.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Mar 18 22:17:03.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3008' Mar 18 22:17:03.984: INFO: stderr: "" Mar 18 22:17:03.984: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 18 22:17:03.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3008' Mar 18 22:17:04.094: INFO: stderr: "" Mar 18 22:17:04.094: INFO: stdout: "update-demo-nautilus-mpfnn update-demo-nautilus-mww9g " Mar 18 22:17:04.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mpfnn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3008' Mar 18 22:17:04.198: INFO: stderr: "" Mar 18 22:17:04.198: INFO: stdout: "" Mar 18 22:17:04.198: INFO: update-demo-nautilus-mpfnn is created but not running Mar 18 22:17:09.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3008' Mar 18 22:17:09.303: INFO: stderr: "" Mar 18 22:17:09.303: INFO: stdout: "update-demo-nautilus-mpfnn update-demo-nautilus-mww9g " Mar 18 22:17:09.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mpfnn -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3008' Mar 18 22:17:09.391: INFO: stderr: "" Mar 18 22:17:09.391: INFO: stdout: "true" Mar 18 22:17:09.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mpfnn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3008' Mar 18 22:17:09.483: INFO: stderr: "" Mar 18 22:17:09.483: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 22:17:09.483: INFO: validating pod update-demo-nautilus-mpfnn Mar 18 22:17:09.486: INFO: got data: { "image": "nautilus.jpg" } Mar 18 22:17:09.486: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 22:17:09.486: INFO: update-demo-nautilus-mpfnn is verified up and running Mar 18 22:17:09.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mww9g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3008' Mar 18 22:17:09.579: INFO: stderr: "" Mar 18 22:17:09.579: INFO: stdout: "true" Mar 18 22:17:09.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mww9g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3008' Mar 18 22:17:09.670: INFO: stderr: "" Mar 18 22:17:09.670: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 22:17:09.670: INFO: validating pod update-demo-nautilus-mww9g Mar 18 22:17:09.674: INFO: got data: { "image": "nautilus.jpg" } Mar 18 22:17:09.674: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 22:17:09.674: INFO: update-demo-nautilus-mww9g is verified up and running STEP: rolling-update to new replication controller Mar 18 22:17:09.677: INFO: scanned /root for discovery docs: Mar 18 22:17:09.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3008' Mar 18 22:17:32.327: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 18 22:17:32.327: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 18 22:17:32.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3008' Mar 18 22:17:32.433: INFO: stderr: "" Mar 18 22:17:32.433: INFO: stdout: "update-demo-kitten-dlfb5 update-demo-kitten-szw9t " Mar 18 22:17:32.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dlfb5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3008' Mar 18 22:17:32.534: INFO: stderr: "" Mar 18 22:17:32.534: INFO: stdout: "true" Mar 18 22:17:32.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dlfb5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3008' Mar 18 22:17:32.632: INFO: stderr: "" Mar 18 22:17:32.632: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 18 22:17:32.632: INFO: validating pod update-demo-kitten-dlfb5 Mar 18 22:17:32.636: INFO: got data: { "image": "kitten.jpg" } Mar 18 22:17:32.636: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 18 22:17:32.637: INFO: update-demo-kitten-dlfb5 is verified up and running Mar 18 22:17:32.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-szw9t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3008' Mar 18 22:17:32.728: INFO: stderr: "" Mar 18 22:17:32.728: INFO: stdout: "true" Mar 18 22:17:32.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-szw9t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3008' Mar 18 22:17:32.827: INFO: stderr: "" Mar 18 22:17:32.827: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 18 22:17:32.827: INFO: validating pod update-demo-kitten-szw9t Mar 18 22:17:32.832: INFO: got data: { "image": "kitten.jpg" } Mar 18 22:17:32.832: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 18 22:17:32.832: INFO: update-demo-kitten-szw9t is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:17:32.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3008" for this suite. 
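Every readiness check in this spec reuses one Go-template query; pulled out of the Running lines above and reformatted for readability (same pod and namespace as this run):

  kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3008 get pods update-demo-kitten-szw9t \
    -o template \
    --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'

It prints "true" only when a container named update-demo reports a running state; the empty stdout at 22:17:04.198 earlier is the "created but not running" case the test retries on.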
• [SLOW TEST:29.192 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":273,"skipped":4386,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:17:32.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:17:37.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8156" for this suite. 
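Adoption is visible on the pod itself: the controller takes ownership by writing itself into metadata.ownerReferences. A minimal way to check it, assuming the pod-adoption pod and replication-controller-8156 namespace from this run (only meaningful while the namespace still exists):

  kubectl --kubeconfig=/root/.kube/config -n replication-controller-8156 get pod pod-adoption \
    -o jsonpath='{.metadata.ownerReferences[0].kind}'

Once the replication controller with the matching selector has adopted the orphan, this prints ReplicationController, and that owner reference carries controller: true.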
• [SLOW TEST:5.102 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":274,"skipped":4392,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:17:37.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 18 22:17:38.917: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 18 22:17:40.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720166658, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720166658, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720166658, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720166658, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 18 22:17:44.014: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:17:44.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9612" for this suite. STEP: Destroying namespace "webhook-9612-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.285 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":275,"skipped":4430,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:17:44.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0318 22:17:45.234686 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 18 22:17:45.234: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:17:45.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8375" for this suite. 
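The behavior verified here, deleting a Deployment without orphaning so the garbage collector also removes its ReplicaSet and Pods, can be reproduced with plain kubectl; a minimal sketch with an illustrative deployment name (not taken from this run):

  kubectl create deployment gc-demo --image=docker.io/library/httpd:2.4.38-alpine
  kubectl delete deployment gc-demo
  kubectl get rs,pods

Cascading deletion is kubectl's default, so the ReplicaSet and Pods disappear shortly after the delete; the transient "expected 0 rs, got 1 rs" and "expected 0 pods, got 2 pods" lines above are the test polling while the collector catches up.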
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":276,"skipped":4477,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:17:45.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Mar 18 22:17:45.331: INFO: Waiting up to 5m0s for pod "client-containers-00a55f28-4e01-4c13-83fc-3d8158a73db4" in namespace "containers-9355" to be "success or failure" Mar 18 22:17:47.589: INFO: Pod "client-containers-00a55f28-4e01-4c13-83fc-3d8158a73db4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258448179s Mar 18 22:17:49.617: INFO: Pod "client-containers-00a55f28-4e01-4c13-83fc-3d8158a73db4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.286418756s Mar 18 22:17:51.622: INFO: Pod "client-containers-00a55f28-4e01-4c13-83fc-3d8158a73db4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.291111054s STEP: Saw pod success Mar 18 22:17:51.622: INFO: Pod "client-containers-00a55f28-4e01-4c13-83fc-3d8158a73db4" satisfied condition "success or failure" Mar 18 22:17:51.625: INFO: Trying to get logs from node jerma-worker2 pod client-containers-00a55f28-4e01-4c13-83fc-3d8158a73db4 container test-container: STEP: delete the pod Mar 18 22:17:51.647: INFO: Waiting for pod client-containers-00a55f28-4e01-4c13-83fc-3d8158a73db4 to disappear Mar 18 22:17:51.658: INFO: Pod client-containers-00a55f28-4e01-4c13-83fc-3d8158a73db4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:17:51.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9355" for this suite. 
• [SLOW TEST:6.422 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4485,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 18 22:17:51.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1596 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 18 22:17:51.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-711' Mar 18 22:17:51.929: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 18 22:17:51.930: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1602 Mar 18 22:17:53.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-711' Mar 18 22:17:54.055: INFO: stderr: "" Mar 18 22:17:54.055: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 18 22:17:54.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-711" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":278,"skipped":4565,"failed":0} Mar 18 22:17:54.063: INFO: Running AfterSuite actions on all nodes Mar 18 22:17:54.063: INFO: Running AfterSuite actions on node 1 Mar 18 22:17:54.063: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4565,"failed":0} Ran 278 of 4843 Specs in 4294.215 seconds SUCCESS! 
-- 278 Passed | 0 Failed | 0 Pending | 4565 Skipped PASS